AI Startups and the Legal Risk of Getting It Wrong

Artificial intelligence (AI) is exploding. Some 5,509 AI startups were founded in the US between 2013 and 2023, and according to Statista, they're attracting a massive amount of funding.

In 2024, AI startups received more than $0.5 trillion and raised over $100 billion.

Despite the success of AI, there are still many gray areas and legal risks. Companies and the large language models (LLMs) they're producing can and do make errors. One study found that LLMs answer incorrectly as often as 60% of the time. And the risk isn't limited to LLMs like ChatGPT; it extends to AI-driven chatbots, finance systems, booking systems, and supply chain control systems. The scope of AI is already greater than most of us imagined.

The legal risks of getting it wrong can be even more serious than those of human error. Read on to find out more.

How AI Gets It Wrong

AI doesn't make mistakes the way humans do. When we get it wrong, it's usually obvious: a miscalculation, an incorrect date, a typo. AI's errors are trickier. They're confident, authoritative, and buried in what looks like accuracy. Known as hallucinations, they happen across all AI systems.

Think about an AI travel booking system that fabricates a non-existent flight. Or, more commonly lately, booking systems that overbook a flight. Or a finance tool that confidently produces a figure off by billions. Or an HR chatbot giving incorrect legal advice to employees.

Training data is another culprit. Bias, outdated information, or a lack of context means AI models learn flawed lessons. An AI built on skewed data will produce skewed results.
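The mechanism is easy to demonstrate. Below is a minimal sketch, using made-up hiring data, of a naive "model" that learns nothing but historical approval rates. If the history is skewed against one group, training faithfully reproduces the skew:

```python
from collections import defaultdict

def train(history):
    """history: list of (group, hired) pairs from past human decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    # The "model" is just the approval rate learned for each group.
    return {g: hired / total for g, (hired, total) in counts.items()}

# Hypothetical biased history: equally qualified candidates, unequal outcomes.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.4} -- the bias survives training untouched
```

Real models are far more complex, but the principle is the same: nothing in the training process corrects the skew on its own, so the output simply inherits it.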

Then add complexity. Many AI systems are black boxes. Even developers don’t fully understand how outputs are generated.

The Common Risks of AI

The risks cover every layer of business operations. Here are some of the most common risks:

  • Data privacy breaches. AI eats data, but feeding it sensitive information without proper controls can break laws such as GDPR or CCPA. Feeding a chatbot sensitive medical records for analysis? That’s a compliance nightmare if a patient finds out.
  • Bias and discrimination. From hiring tools screening candidates unfairly to financial services denying loans based on skewed data, AI can replicate systemic biases. These turn into discrimination claims fast.
  • Intellectual property (IP) issues. If an AI generated content based on copyrighted training data, who owns the result? And who gets sued if the output infringes on someone else’s IP? Courts are still working that one out.
  • Misinformation and defamation. An AI system that outputs false or harmful information about an individual or brand can trigger libel suits.

  • Operational errors. Think of a supply chain AI sending the wrong shipment to the wrong location, or a trading algorithm executing damaging trades.
  • Regulatory non-compliance. In industries such as finance, healthcare, and insurance, strict regulations exist for a reason.
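On the privacy point in particular, a common mitigation is to ensure raw personal data never reaches the model at all. Here is a minimal sketch of redacting obvious identifiers before text is sent to a third-party API; the patterns and labels are illustrative, not production-grade:

```python
import re

# Hypothetical guardrail: strip obvious identifiers before any text leaves
# your systems. A real deployment needs far more (consent, audit logs, data
# processing agreements), but the principle is simple: the model never
# needs to see raw personal data.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Patient jane.doe@example.com, SSN 123-45-6789, reports pain."))
# Patient [EMAIL], SSN [SSN], reports pain.
```

Redaction alone won't satisfy GDPR or CCPA, but it illustrates the kind of control regulators expect to see in place before sensitive data touches an AI system.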

High-Profile AI Mistakes

There have been some high-profile and widely publicized AI mistakes.

In early 2024, Air Canada found itself in hot water after its customer service chatbot promised a traveler a bereavement discount that didn't exist under the airline's actual policy. Air Canada argued that the chatbot was a separate entity responsible for its own actions. The tribunal disagreed and ordered the airline to honor the discount.

DoNotPay, a startup branded as the “world’s first robot lawyer”, faced a class action lawsuit for allegedly practicing law without a license. The platform promised to help users with legal claims through AI but lacked attorney oversight. Users argued the service was misleading, and they were correct.

IBM’s Watson for Oncology once promised to revolutionize cancer treatment recommendations. Instead, it offered “unsafe and incorrect” suggestions, according to internal documents.

And let’s not forget Microsoft’s infamous Tay chatbot. Within 24 hours of its 2016 release, Tay went from playful to producing offensive, harmful content thanks to online manipulation.

The Legal Risks for AI Startups

As it stands, lawsuits in which companies claim their AI system, rather than the company itself, was at fault don't succeed. Legally speaking, you are your AI system. Some of the most common legal risks include:

  • Product liability. If your AI causes harm (financial, physical, or reputational), your startup could face product liability claims. Professional errors and omissions insurance won't necessarily cover AI mistakes.
  • Contractual liability. Promising too much in your terms of service, or failing to deliver, opens the door for breach-of-contract claims.
  • Regulatory enforcement. The EU’s AI Act, California’s privacy laws, and FTC guidance in the US are all tightening the rules around AI.
  • Employment law. If your AI hiring tool filters out candidates unfairly, it isn’t the algorithm’s fault.
  • IP disputes. Using copyrighted material in training without permission or generating outputs too close to existing works can lead to lawsuits. Getty Images already sued Stability AI for exactly this.

You can’t blame the AI you’re using if it gets something wrong. That type of protection doesn’t exist yet. AI startups have to be aware that, almost always, the fault is theirs.
