
Opinion January 13, 2025
Nothing new under the sun: artificial intelligence, rise of the bots and parallels with company law


Law, regulation and taxation are the scaffolding that surrounds our social contract.  They are going to need to adapt lest that scaffolding fall down in the wake of developments within artificial intelligence (“AI”) and Web3.

Google published a white paper in September 2024 called ‘Agents’, another word for the more colloquial ‘bots’.  The paper begins by saying that a “combination of reasoning, logic, and access to external information that are all connected to a Generative AI model invokes the concept of an agent, or a program that extends beyond the standalone capabilities of a Generative AI model.”

If one has a basic understanding of what a large language model is, then the key words in that definition from Google are “extends beyond”.

Agents and bots

And the paper defines an agent – hereafter we use the term ‘bot’ – as “an application that attempts to achieve a goal by observing the world and acting upon it using the tools that it has at its disposal. Agents are autonomous and can act independently of human intervention, especially when provided with proper goals or objectives they are meant to achieve. Agents can also be proactive in their approach to reaching their goals. Even in the absence of explicit instruction sets from a human, an agent can reason about what it should do next to achieve its ultimate goal.”
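
For the technically minded, the loop described there (observe the world, reason about what to do next, act using a tool, repeat) can be reduced to a few lines of code.  The sketch below is ours and purely illustrative, not anything from Google’s paper: the reason() function stands in for a generative AI model, and the ‘world’ and tools are toys invented for this example.

```python
# A toy illustration of the agent loop: observe, reason, act, repeat.
# reason() is a stand-in for a generative AI model; the tools are invented
# for this sketch and bear no relation to any real agent framework.

from typing import Callable

# The agent's "world": state it can observe and act upon.
world = {"temperature_c": 30.0, "target_c": 21.0}

def read_thermometer() -> float:
    """Tool for observing the world."""
    return world["temperature_c"]

def lower_temperature() -> None:
    """Tool for acting upon the world."""
    world["temperature_c"] -= 3.0

TOOLS: dict[str, Callable] = {
    "read_thermometer": read_thermometer,
    "lower_temperature": lower_temperature,
}

def reason(observation: float) -> str:
    """Stand-in for the model: decide the next action from the observation."""
    return "lower_temperature" if observation > world["target_c"] else "stop"

# The loop itself: no human intervention between setting and reaching the goal.
for _ in range(10):  # hard cap so the sketch always terminates
    observation = TOOLS["read_thermometer"]()
    action = reason(observation)
    if action == "stop":
        print(f"Goal reached at {observation:.1f}C")
        break
    TOOLS[action]()
```

The point of the sketch is the autonomy: once the goal is set, each pass through the loop is the program’s own decision, which is precisely what the “extends beyond” language is driving at.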

How will bots be regulated?  For designers and builders of bots, a wave of consumer protection regulation is incoming.  From a conceptual perspective that is, however, the easy bit.  What is much harder is reaching jurisprudential consensus on what these bots actually are and whether they are deserving of legal personhood.

If that does not happen – noting the words “observing the world and acting upon it” from Google’s paper – then law, regulation and taxation will have only the designers and builders of AI to engage with, even though – as Google say – “Agents are autonomous and can act independently of human intervention”.

Prior line of thought as to AI personhood

You might think that we have veered into the realms of science fiction already, but this is not a new line of thought.  The EU looked seriously at AI having a separate legal personality back in 2017, but there was well-publicised pushback in 2018 from experts across a wide range of industries and professions.

That, however, was all well before the latest versions of ChatGPT (and the other large language models) were released and it became clear to anyone using them how these systems could have effects in the real world, and that pretending they do not exist, or banning them, would bring about an unsatisfying result.

The point is – if one is to believe the hype – that AI bots will soon be upon us with hitherto unimagined levels of capability, and they may not always be under the control of their creators; in many cases they probably will not be.

Analogy with companies

One might at this point then turn to Ecclesiastes 1:9 – which says “what has been is what will be, and what has been done is what will be done; there is nothing new under the sun” – and prudently observe parallels from the history of company law and legal personhood for corporations.[1]

Since at least the 16th century, law makers have provided for companies and corporations so that natural persons can share risk and resources in an entity that has its own separate legal personality.  The future shareholders of the East India Company asked Queen Elizabeth I in 1599 for a royal charter, and noted “that the trade of the Indies… cannot be traded but in a joint and a united stock”.

At the time, this was a novel and sophisticated development: law makers were allowing to be brought into existence an intangible legal ‘person’ that could interact within that social contract, could itself have rights and responsibilities, and could in turn be regulated and taxed.[2]

UK company law as soft power

Hundreds of years of company law have since been written up and fine-tuned in line with prevailing market practice and public policy; and the UK arguably has the oldest and most sophisticated company rule book of any jurisdiction anywhere in the world.

Those rules have developed incrementally to move with the times, subject to the occasional overhaul, most recently in the Companies Act 2006.  And the UK has effectively exported, and continues to export, those rules into the Commonwealth and many other common law jurisdictions as a generator of its soft power globally.

For instance:

  • Australia’s Corporations Act 2001 evolved from UK corporate legislation, specifically the UK Companies Act 1862 and the amendments to the same through the 19th century;
  • South Africa’s Joint Stock Companies Limited Liability Act, No. 23 of 1861, of the Cape Colony was derived from the English Limited Liability Act 1855;
  • when the Companies Act 1862 consolidated the concept of limited liability in UK statute, Canada swiftly followed suit and passed ‘An Act to authorize the granting of Charters of Incorporation to Manufacturing, Mining, and other Companies’ in 1864 to achieve the same result;[3]
  • New Zealand’s Companies Act 1955 “was almost an exact copy of the United Kingdom Act of 1948” as “the following of English precedent has been a tradition of New Zealand company law since the first Act of 1860”;[4] and
  • the Companies Act 1981 of Bermuda, and specifically the requirement to form a company by way of a memorandum of association, is derived from the UK’s Companies Act 1948.

We could go on; and if we did it would be a long list.

And the UK’s Privy Council has acted as the highest court of appeal for various parts of the British Empire and latterly the Commonwealth countries.  As recently as November 2024, the Privy Council was sitting as the final court of appeal for the Cayman Islands in a case involving the rights of minority shareholders.

This has all been hugely beneficial to the UK and – in its time – the British Empire.

Ownership and control by natural persons

In all that time, one key legislative assumption has not been subject to sensible challenge: companies are owned and controlled by natural persons (even if one needs to go up through a group structure to get to the ultimate beneficial owners).

But as is often the case in Web3, truths previously thought to be self-evident are now subject to challenge.  It is now fairly easy to imagine that one or more offshore jurisdictions – seeking to direct activity their way – will legislate for companies being incorporated by AI, and in doing so will presumably need to legislate for the conditions upon which AI can achieve legal personhood.

Whoever does that will have considerable first-mover advantage.

Bots

Paraphrasing the quote from Google that we began with, an AI bot can be thought of as a software program or system that carries out its functions automatically and without human interaction.  Some examples follow, but there are many others:

  • Chatbots are the most common category of bot.  No doubt you have encountered these before – asking ChatGPT to explain something in 10 words or fewer, summoning Amazon’s Alexa or generating an image using Meta AI are all common day-to-day interactions.
  • Social bots are automated programs designed to engage with users (and other social bots) on social media platforms.  They do not need to have conversational capabilities like a chatbot, instead providing a ‘like’ or ‘follow’ in relation to specific content.  While social bots may be a useful tool (e.g. by providing the latest sports scores), they have garnered a negative reputation for spreading misinformation, encouraging negative commentary and hate amongst other social media users and amplifying reputational damage.[5]
  • Monitoring bots are designed to track and monitor specific activity on websites, social media platforms and security systems to provide insights, identify trends or flag potential threats to the seamless operation of the system.
  • Googlebot, Bingbot and Slurp are all examples of web crawler bots – bots that download and index content across the Internet to optimise information retrieval (a minimal sketch of the idea follows this list).
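
By way of illustration only, the core of a web crawler can be expressed in a short script.  The sketch below is a simplification of our own using only the Python standard library, not any search engine’s actual code; real crawlers also respect robots.txt, rate limits and much else besides.

```python
# A simplified web crawler: fetch a page, index its words, queue its links.
# Illustrative only; omits robots.txt, politeness delays, error handling, etc.

from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collects the hyperlinks and visible text of one HTML page."""
    def __init__(self) -> None:
        super().__init__()
        self.links: list[str] = []
        self.words: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.split())

def crawl(seed: str, max_pages: int = 3) -> dict[str, set[str]]:
    """Breadth-first crawl from a seed URL, building a word -> pages index."""
    index: dict[str, set[str]] = defaultdict(set)
    queue, seen = [seed], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        parser = LinkAndTextParser()
        parser.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
        for word in parser.words:
            index[word.lower()].add(url)
        queue.extend(urljoin(url, link) for link in parser.links)
    return index

# Example use (hypothetical seed URL): index = crawl("https://example.com")
```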

Bots are even now interacting with natural persons and corporations.

Use-cases for bots

Here are some real-world use-cases:

  • Woebot is a mental health chatbot that employs generative AI to provide its users with therapy and mental health support. Woebot has attracted nearly 1.5 million users since its inception in 2017, and its creator, Woebot Health, was named the ‘Best Overall Mental Health Solution’ in 2023.
  • In 2018, L’Oréal partnered with chatbot Mya to automate the review of job applications received. The beauty giant confirmed that, because Mya was able to review the c.12,000 applications received for 80 internship positions, 200 hours were saved in the hiring process.
  • H&M’s Kik chatbot provides users with personalised fashion and style tips by reviewing the user’s input or request, H&M’s products and the results of the user’s personality quiz. The Kik chatbot has approximately 15 million monthly users.
  • US investment fund Voleon recently developed and launched its own AIM BOT, a fully automated, AI-driven robot designed specifically to run a trading platform focused on identifying key trends in volatile markets, particularly crypto markets.
  • In 2024, Marc Andreessen, co-founder of Andreessen Horowitz, gave $50,000 in Bitcoin to a bot called Truth Terminal. The bot, following the suggestion from its creator Andy Ayrey, used this funding to launch and endorse its own token and subsequent meme coin called ‘goatseus maximus’ or ‘GOAT’.  Within 10 days of its listing on CoinMarketCap, GOAT was reportedly valued at $878 million.  You could not make it up if you tried.

Most of what bots can currently do is fairly limited in scope, and bots are not, at the time of writing, displaying features of artificial general intelligence rivalling that of humans outside of the performance of very specific functions.

That is expected to rapidly change in the coming years.

Query how that legal scaffolding in the social contract applies to bots.  Law makers, regulators and taxation authorities are not (yet) fully engaged with the issue because the existing rule book only applies to natural persons and to corporations with that legal personality (which are in turn owned and controlled by natural persons).

Bots are neither of those.

EU AI Act and UK AI legislation

Bots could be regulated by the EU’s Artificial Intelligence Act (“AIA”), although only to the extent that the AIA seeks to regulate AI systems more broadly and protect the rights of natural persons.  The UK, no longer an EU member state, is still formulating its own AI legislation and last published a white paper in March 2023 under the Sunak government.[6]

Much of that provides scaffolding to guard against bad actors in the design and building of bots, and so at the point of launch, there will be natural persons or corporations against whom those obligations can be enforced in the courts (and regulation backed up with stiff penalties acts as a deterrent).

This, however, is in substance consumer protection legislation only and nothing so radical as legislating for the conditions under which a bot could achieve personhood.

A mind of their own

The AIA does not adequately deal with the fact that bots have or will eventually have – literally – a mind of their own and could carry on doing whatever it is they were designed to do in perpetuity.  None of that tells us what happens when bots start building other bots.  None of that explains what happens when bots exist completely beyond the control of their creators.

Some other jurisdictions have introduced or are introducing their own legislation but none of it addresses this rather large elephant in the room, that of legal personhood and an existence recognised by law that is separate from its designers and builders.

China: Interim Measures for the Management of Generative Artificial Intelligence Services 

In 2023, the Interim Measures for the Management of Generative Artificial Intelligence Services became the first example of generative AI-specific regulation in China.  Introduced jointly by various Chinese government administrations, the measures are widely regarded as AI-friendly, having converted harsher obligations from previous drafts (e.g. obligations to filter out unlawful content) into more lenient provisions (e.g. obligations simply to report how the filtering process works).

However, its failure to define bots, generative AI or even just AI creates uncertainty regarding its scope and applicability.

It is important to note that various other legislation, such as the Administrative Provisions on Deep Synthesis in Internet-based Information Services 2023, may indirectly impose restrictions on the development and use of AI depending on the use-case.

Japan: Basic Law for the Promotion of Responsible AI

In February 2024, Japan’s Liberal Democratic Party (“LDP”) published a draft bill named the ‘Basic Law for the Promotion of Responsible AI’.  While this proposed legislation is yet to be approved, if enacted it would be the first example of hard legislation focused on the regulation of AI in Japan, specifically powerful generative AIs (called ‘frontier AI models’).

The proposed legislation further imposes obligations on AI developers to implement robust systems and to conduct diligent reporting and supervision of various processes.  It also establishes strict penalties for those who breach those obligations.

However, it remains silent on the specific regulation of bots.[7]

Singapore: Model AI Governance Framework for Generative AI

While Singapore does not have any AI-specific legislation, in May 2024 the government introduced the Model AI Governance Framework for Generative AI.  It takes a principles-based approach, listing high-level guidelines that should be adhered to by businesses that use or create AI.

The framework makes a singular reference to bots, suggesting that governments worldwide should implement “digital literacy initiatives to encourage safe and responsible AI use”, which “could include educating end-users on how to use chatbots safely”.  This does not provide any further insight into how bots can be regulated; it simply acknowledges their existence.

No plans to introduce binding legislation that governs AI in Singapore have been announced at the time of writing.

South Korea: Basic Act on the Development of Artificial Intelligence and Establishment of Trust

In December 2024, South Korea’s National Assembly passed the Basic Act on the Development of Artificial Intelligence and Establishment of Trust (the “Basic Act”), making South Korea the second jurisdiction (following the EU) to introduce comprehensive AI legislation.

Informed by an array of political parties, one of the key tenets of the Basic Act is the establishment of a framework that regulates the use and consequences of ‘high-risk AI systems’ (being AI that has a significant impact on public health and safety or fundamental rights) and ‘generative AI systems’.

Some AI bots will be classified as ‘high-risk AI systems’ and most as ‘generative AI systems’.  It therefore remains to be seen whether the Basic Act will encourage or erode the use of AI bots within the jurisdiction, given how difficult their use-cases are to pin down and how unknown their potential remains.

The new legislation will take effect in January 2026.

United Arab Emirates (“UAE”): The UAE Charter for the Development & Use of AI

Neither mainland UAE nor its financial free zones (namely the Abu Dhabi Global Market and the Dubai International Financial Centre) have enacted hard legislation at the time of writing concerning the regulation of AI bots or of AI in general.  Instead, these jurisdictions rely on non-binding guidelines or on amending sector-specific legislation currently in force.[8]

For example, the UAE’s Artificial Intelligence, Digital Economy, and Remote Work Applications Office introduced a UAE Charter for the Development & Use of AI in July 2024. The charter sets out key principles that underpin the UAE’s AI strategy but does not go any further, and does not consider AI bots specifically.

United States (“US”)

It may come as a surprise that the US does not have any federal legislation that specifically regulates AI. Rather, individual states and industries have developed a patchwork of legislation that may be relied upon to regulate AI, and AI bots in particular.  It would be incorrect, however, to assume that no progress has been made at the federal level.  In fact, there are currently over 100 bills that endeavour to govern AI under consideration by Congress.[9]

With the US economy projected to experience continued growth and Trump’s inauguration scheduled for later this month, it is no wonder that the US AI market is estimated to be worth c. $66 billion in 2025.

As such, it is likely that the US will take a leading role in the regulation of AI, both in scope of governance and volume of legislation.  Whether said regulation focuses on the real issues at hand here however is another question.

California

For example, in California the Bolstering Online Transparency Act (reduced – intentionally or otherwise – to the acronym “BOT”), which came into effect in July 2019, defines a bot as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person”.

Section 17941 further states that “it shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity… in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election”.  The BOT Act does note, however, that “if the person discloses that it is a bot” this will be a valid defence.

New York

Similarly in New York, the bill ‘An act to amend the general business law, in relation to liability for false information provided by a chatbot’ is currently under review by the Senate Internet and Technology Committee.  It defines a chatbot as “an artificial intelligence system, software program, or technological application that simulates human-like conversation and interaction through text messages, voice commands, or a combination thereof to provide information and services to users”.  Crucially, the purpose of the bill is to “assign liability for the actions of chatbots to the proprietors of such chatbots”.  However, it does not delve into who holds liability for the actions of any bots created by the chatbot.

The UK’s position on the global stage

At a time when the UK is watching its relevance on the world stage inexorably seep away, we at HLaw posit that there exists an opportunity to build scaffolding enforceable under English law to legislate for:

  • the existence of those very bots;
  • what conditions must be satisfied for a bot to be a legal person;
  • what it means for a bot to be an owner of property;
  • how natural persons and corporations can enter into enforceable contracts with bots;
  • liabilities created by bots for natural persons and corporates;
  • profits accumulated by bots that might be available to be taxed;
  • what it might mean for a bot not to be in a position to satisfy its debts as they fall due and be insolvent;
  • and so on.

That would require somebody here in the UK to take some brave steps in jurisprudence, but it is an area in which the UK can still lead the world, and these – we would argue – are steps that will eventually need to be taken if one assumes that some form of generalised artificial intelligence will emerge from the large language models.  If other jurisdictions get there first, the UK and the EU – whether they like it or not – will be forced to deal with AI bots that are recognised locally as having some form of legal personality.

425 years ago, Queen Elizabeth I gave the East India Company a royal charter to form a company and with that in hand an empire was built on which the sun never set.  Perhaps the UK has the opportunity at this very moment to take a similarly bold step in AI and to be a first mover.

This piece was written by Henry Humphreys and Alina Merchant-Mohamed.  Most of it is mere opinion and conjecture and should be treated as such.  Please reach out to a member of the HLaw team if you would like to discuss any legal or regulatory queries relating to company law or Web3, crypto and digital assets generally.

All the thoughts and commentary that HLaw publishes on this website, including those set out above, are subject to the terms and conditions of use of this website.  None of the above constitutes legal advice and is not to be relied upon.  Much of the above will no doubt fall out of date and conflict with future law and practice one day.  None of the above should be relied upon.  Always seek your own independent professional advice.

Humphreys Law


[1] Every UK law school student comes across the much more recent case of Salomon v Salomon & Co Ltd [1897] AC 22, in which Mr Salomon personally was shielded from the claims of a creditor of the insolvent company in which he was the shareholder.

[2] And until the 19th century and the Joint Stock Companies Act 1844, a corporation could only be established by Royal charter or act of Parliament.

[3] See paragraph 28 of the Act, which limits the liability of shareholders, at page 181 of the Statutes of the Province of Canada.

[4] New Zealand Law Commission, Company Law – Reform and Restatement, Report No 9 (June 1989), 8.

[5] CHEQ, a cybersecurity company that tracks bots, estimated that during the 2024 Super Bowl weekend, 75.85% of the supposed traffic from X to its advertisers’ websites was fake.  A significant proportion of this traffic comprised bot activity (rather than human interactions) on X.

[6] There has been some indication that existing legislation focused on specific areas will be amended to address bots.  For example, Ofcom issued an open letter to online service providers operating in the UK to reiterate that the Online Safety Act 2023 will apply to generative AI and chatbots.

[7] In the meantime, soft law, such as the LDP’s second AI White Paper 2024 and the Ministry of Economy, Trade and Industry’s AI Guidelines for Business Appendix Ver1.0 2024, continues to encourage discourse in, and the development and regulation of, the AI sector, with the ultimate goal of positioning Japan as the “world’s most AI-friendly country”.

[8] For example, the DIFC Data Protection Regulations were amended to include Article 10, which governs ‘personal data processed through autonomous and semi-autonomous systems’.

[9] These bills have been collated here.
