AI Chatbots – a non-human language community
11 March, 2023
In our recent article "ChatGPT: Useful or not?" we discussed the use of ChatGPT. At present, ChatGPT is far from perfect, and whether it is useful can be framed as a risk-management question, much like choosing to take a taxi with a human driver instead of an AI self-driving taxi, or accepting the risk of an accident and travelling by plane.
However, in some cases we may still lack sufficient reason to apply ChatGPT in our daily lives today. Unlike taxi or air passengers, users of ChatGPT are not yet legally protected. For example, if you were harmed by misleading or inaccurate information generated by ChatGPT, it seems unlikely that service providers such as OpenAI or Microsoft would be liable for your losses. This is different from taking a taxi or travelling by air: in most jurisdictions today, passengers are protected by law when using various means of transport, and insurance systems exist to protect their interests in the event of a traffic accident.
For general Internet users, search engines (such as Google Search) provide only a collection of information sources (websites and URL links), and users choose which content to browse for their search purposes. Search engine providers are not content providers; whether the information is accurate or false, web content is supplied by the many website publishers and online media services.
Accessing online chatbot services built on artificial-intelligence large language models (AI LLMs) can be understood as a further extension of Internet search. When chatbot service providers (such as ChatGPT's OpenAI) build their machine learning models (the result of fitting tens of billions or more parameters), they ensure that the chatbots "learn" from controlled data inputs during the AI training process. For instance, when ChatGPT was first launched in late 2022, OpenAI trained it on textual content that appeared before 2021. While the training data comes from many different sources (OpenAI says it used Wikipedia, books, news articles, and scientific journals to train ChatGPT), chatbot services present users with only "curated editorial content".
A chatbot service provider is much like a content provider: it controls what its service offers. From another perspective, we can think of chatbot service providers as today's book and magazine publishers, or even as writers.
So the public's next step might be to ask: what are the responsibilities and obligations of chatbot service providers, just as we ask of book and magazine publishers or media content providers today?
When a chatbot service provider charges a fee for its service, consumers naturally want to know how the provider ensures the quality of the content it supplies, such as truthful information, fair views, and impartiality. Society may also require service providers to adhere to certain legal and ethical standards (such as not infringing copyright, disseminating information fairly, and not discriminating).
Today the use of AI LLMs is just beginning, as if we were at the dawn of the invention of paper and publishing in human history. This time, however, the evolution is far more rapid.

Suggested reading
The Age of AI Has Begun, Bill Gates, co-founder of Microsoft.

Photo: Gladstone's Library. Located in Hawarden, Wales, UK.
Image credit: Wikimedia Commons