With the rise of AI technology comes the thrill of having the world's knowledge at your fingertips. This potential, however, also brings with it an inevitable temptation: using AI to generate documents that were once crafted by professionals. One such example is the growing trend of AI-generated wills, which can be created either by individuals nearing the end of their lives or by their family members. Whether for convenience, cost, or simplicity, have you ever considered using AI to create your own will?
The introduction of AI into the global sphere has fundamentally altered the way the world operates. As this technology continues to permeate every facet of our society, it has gradually made its way into the legal field—more and more services in New York have begun to leverage AI to carry out tasks that were once deemed inherently human.
There is nothing more profoundly human than a will. It serves as the final extension of an individual’s existence, one that allows their presence to linger on Earth and transcend the limitations of their physical body. It addresses the things we hold most dear, whether that be the relationships we have with family members, our most cherished possessions, or our aspirations for the future. Its creation requires careful reflection, emotional fortitude, and a sense of togetherness. In short, a will is far more than a simple document—it is, instead, a means of both preserving and immortalizing the life of the deceased.
Using AI to create a will not only leads to significant errors in its content, but also undermines its very purpose and legacy. Convenience should not take precedence when creating a will; rather, the priority should be honoring the wishes and legacy of the person making it. Choosing to create a will with the guidance of a professional attorney is the respect that you and your loved ones deserve. Listed below are some of the issues you may encounter if you use AI to draft your New York will, whether you are dealing with a simple estate or a high-net-worth one.

The Cold Limitations of AI in Human Judgment
While an AI tool may initially seem sympathetic and understanding, behind its veneer lies a cold, impersonal set of algorithms that is ultimately unable to grasp the nuances of interpersonal relationships. Consequently, when asked to allocate pieces of property or decide who should receive them, relying on AI is akin to throwing darts at a board to pick the next recipient. This rings especially true in situations involving disputes: an AI is likely to be swayed by superficial data and to find patterns that do not exist in real life, missing the deeper emotional context and unique family dynamics that humans use to navigate these situations. An AI's tendency to make mistakes grows in tandem with the complexity of the matter at hand, compounding the difficulty and inviting even more trouble in the future.
The best way to avoid such situations is to keep AI away from moral complexities altogether, whether that means distributing assets, as mentioned above, or offering advice on personal dilemmas. A tool that you might consider largely rational and correct can, and often does, produce skewed outputs at random, and it is unreliable as a source of sound advice. Only a fellow human can truly understand the turmoil of another. ChatGPT and Bing Chat certainly cannot.
The Dangers of AI’s Vague Language
One of the most crucial aspects of drafting a will is the language used: every word, clause, and legal term must be carefully chosen to reflect the testator's precise intentions. Even the slightest variation in phrasing can completely change how an estate is distributed, and any ambiguity can sow the seeds of future disputes within the family.
AI chatbot writing has gained a reputation for being vague and generalized, often lacking the nuance and specificity that a task like drafting a will demands. As with its difficulty understanding complex emotions, discussed above, an AI's inability to produce precise, context-sensitive language makes it an unreliable tool, especially for legally binding documents. The certainty, clarity, and complexity a will requires are precisely where AI falls short, making it a tool to steer clear of when creating one.
How AI Can Spread Misinformation in Legal Documents
Studies have shown that AI can heavily distort information, even when drawing from sources uploaded by the individual themselves. BBC researchers have found that AI often struggles to distinguish between sources, summarize text accurately, and extract direct quotes. Perhaps the most concerning aspect of its outputs, however, is AI's tendency to generate fiction: making up dates, numbers, facts, rules, legislation, and more, without any indication to the user that the information is fabricated. Many users simply assume this information is accurate because the AI claims to draw from existing sources on the web.
This poses significant problems when creating a will. AI may include factually incorrect clauses that cause the document to be rejected in probate. It may cite non-existent legal rules or present unenforceable terms. It may also misrepresent personal situations, disregarding the specific wishes of the individual or the final words of a loved one. Errors that a professional drafter would catch, or avoid entirely, can prevent the will from ever being fully executed.
Furthermore, even in the rare instances when the information is entirely accurate, the sources AI draws from are often outdated and not regularly refreshed. Ultimately, it's strongly advised to avoid using AI for highly specific and sensitive documents in which accuracy and legal validity are paramount. Not only are humans much better at catching mistakes, but professionals also have an in-depth understanding of the law, unlike AI algorithms that often struggle to interpret it correctly.
Privacy Risks When Using AI for Sensitive Information
Information entered into an AI program may be stored indefinitely and used to train future versions of the model. Many AI tools rely on personal data to function and improve their algorithms, often retaining it for future use. How would you feel if an AI bot used your personal situation, or the will of a loved one, to enhance its technology or platform? Not only is this an inappropriate breach of privacy, but it also greatly undermines the sanctity of an individual's final wishes after their death.
The idea that your data could be used in this manner raises significant concerns. Sensitive information should never be entered into such a system, regardless of the company's location, its claims, or the assurances it offers. A data breach can jeopardize your security if that information is leaked, and it is impossible to fully verify how companies handle your data. Attorney-client confidentiality can only be guaranteed when a licensed attorney is directly involved in the drafting.
The Jurisdictional Pitfalls of AI-Generated Wills
Finally, because AI tools are designed to serve a broad audience, the information they provide is often generalized and fails to account for state-specific requirements. Each state has its own rules regarding wills, and these differences are frequently overlooked both by AI tools and by individuals without the knowledge to apply them. Even small deviations in the format of a will, such as following another state's guidelines, could render the entire document invalid in court. Without professional guidance, it's difficult to ensure that a will meets the legal standards specific to New York.
The Real-World Dangers of Using AI to Draft a Will
It might be easier to visualize these issues in the form of a story, so here is an example.
Your grandmother hoped to pass on her extensive collection of artwork to her descendants. Hearing about this potential inheritance, your estranged uncle suddenly re-enters the family picture and claims precedence over his nieces and nephews, arguing that her children should receive more of the collection than her grandchildren. Resentment builds as he insists on receiving a share of the art, despite having severed all family ties years ago and never once reaching out to wish his mother a happy birthday or join her for a major holiday.
The situation becomes dire when it becomes clear that time is running out to finalize her will. In haste, you and your family turn to ChatGPT, hoping its apparent rationality and accessibility will help smooth over the complexities of the will before your grandmother passes. Your grandmother reads over the will quickly and approves it, hoping to avoid any more animosity before her death. The rest of the family seems fine with it as well. She passes away the following week.
Now it is finally time to execute it. You schedule an appointment with Inna Fershteyn, one of the top-rated estate planning attorneys in Brooklyn. At your meeting, however, to your dismay, she immediately finds several issues that cannot be fixed. For one, ChatGPT invented several pieces of artwork that never existed in your grandmother's collection: the will claims your mother will inherit Cloudy Night over the Seine by Van Gogh, but you later discover no such painting exists. There are other mistakes too: ChatGPT misidentifies Shot Marilyns by Andy Warhol as a print rather than a painting; assigns certain pieces to your "aunt," despite you having only uncles; and fabricates details about your family members and their relationships. It even misspells the name of one of your cousins.
Further complicating matters is your meddling uncle. Since the collection was divided among the family by artistic movement, he resolves to claim every piece that was left over and could not be assigned to one of the designated movements. So while the will specifies that you will inherit "all her collection of Cubist paintings," your uncle, who claims to have taken an art history class online, argues that Les Demoiselles d'Avignon by Picasso, which happens to be your favorite piece, should go to him. He contends that the painting, which sits on the edge of Cubism, is more aligned with Primitivism and therefore belongs to him. Frustrations escalate as he uses the same argument for other works, and the vague language in the will makes it nearly impossible to resolve these disputes. On top of this, you're torn because you know your grandmother wouldn't want him to receive more than his fair share, especially given his ingratitude and the fractured family bond.
Finally, you discover that parts of the will are completely invalid because your grandmother copied them out by hand from ChatGPT herself; the AI tool had based its holographic will standard on New Jersey's regulations rather than New York's, which recognizes handwritten wills only in narrow circumstances. To make matters worse, news breaks of a sudden data breach at OpenAI that exposed the personal information of millions of users to malicious actors. You realize with horror that, while drafting the will, you entered sensitive details about its contents and your family, and you have no way of knowing what is now at risk.
Your grandmother's passing has closed the door to any further changes. The thought of altering anything now feels like a betrayal, but as you reflect, you begin to question whether you ever truly honored her wishes to begin with. Instead of taking the time for thoughtful conversations with your family, you chose to plug your issues into a machine and let it spit out a solution. It overlooked the complexities of your relationships, missed crucial details, and led to dissatisfaction and confusion. Ultimately, you realize you made a huge mistake, one that can never truly be undone.
Why Estate Planning Shouldn’t Rely on AI
The issue with AI-generated wills isn't whether you can use AI to create one; technically, anyone can. The real question is whether you should. While the convenience of AI tools may seem appealing, the reality is that relying on AI to draft something as significant as a will is a risky choice that can create more problems than it solves. From privacy breaches and misinformation to outdated or incorrect legal guidance, the supposed "benefits" of AI become irrelevant when the end product is fundamentally flawed.
Beyond the technical pitfalls, it's also important to reflect on the deeper significance of a will. It is not a simple document written to meet a school deadline or fulfill a workplace task; it is a deeply personal and legally binding expression of your family's history, one that carries your legacy forward for generations to come. Entrusting this responsibility to a machine undermines the dignity and gravity of the process, and perhaps something of our humanity itself.
For that reason, it’s strongly recommended to begin your will-drafting process in New York with the guidance of a qualified professional who understands both the legal intricacies and the unique needs of your family. Navigating the complexities of estate planning without expert advice can lead to unintended consequences and family disputes. Contact the Law Office of Inna Fershteyn at (718) 333–2394 for experienced, compassionate assistance in creating a will that honors your intentions and protects your loved ones.
