Ever wondered what sentient artificial intelligence would look like, and how we as a species would interact and socialize with such a program? Through the medium of film, literature and video games we have been able to dive headfirst into the hypothetical. But few technologies have come close to cracking the artificial intelligence code.
However, a San Francisco lab has recently come closer than most to an authentic artificial intelligence. OpenAI – a research company co-founded by Elon Musk – unveiled a technology on June 11th, 2020, after several months in the making: GPT-3.
Get to know GPT-3
GPT-3, or Generative Pre-trained Transformer 3, is the third version of this tool to be released. It generates text using pre-trained algorithms – it has already been fed all of the data it requires to carry out its task: around 570GB of text gathered by crawling the internet (via a publicly available dataset known as CommonCrawl).
GPT-3 has spent months analyzing thousands of digital books, Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet. In short, GPT-3 can create anything that has a language structure – which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even generate computer code. A sketch of what that looks like in practice is below.
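To make that concrete, here is a minimal sketch of how a developer might ask GPT-3 to summarize a passage through OpenAI's (GPT-3-era) Completion API. The engine name, prompt wording and parameters are illustrative assumptions, not details from this article:

```python
# Minimal sketch: asking GPT-3 to summarize a passage via OpenAI's
# GPT-3-era Completion API. Engine name and parameters are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

passage = (
    "GPT-3 is a language model trained on hundreds of gigabytes of "
    "text gathered by crawling the internet."
)

response = openai.Completion.create(
    engine="davinci",  # GPT-3 base engine
    prompt=f"Summarize in one sentence:\n\n{passage}\n\nSummary:",
    max_tokens=40,     # cap the length of the reply
    temperature=0.3,   # keep the output focused
)

print(response.choices[0].text.strip())
```

The same call, with a different prompt, covers the other tasks listed above – translation, Q&A, memo-writing – since the model only ever sees text in and text out.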
Shockingly Good
The Guardian published an article about GPT-3 in which the AI itself drafted the language. The piece is written in the first person, and you could easily believe a human was speaking to you directly. The only caveat is that it was edited by an actual human to make it a smoother read.
Many people from technology backgrounds have been experimenting with GPT-3, such as Kevin Lacker, co-founder and CTO of Parse, who has put the tool through its paces. He ran GPT-3 through the Turing Test: a method of inquiry in artificial intelligence that assesses a computer’s capability to think like a human being through common sense, trivia and logic. From his findings, Lacker states the technology is “quite impressive in some areas, and still clearly subhuman in others”, also noting how artificial intelligence often struggles with the concept of “common sense”. You can find GPT-3’s test results and answers here.
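Probing of this kind typically uses a simple question-and-answer prompt format. The sketch below shows the general technique; the priming examples and the question are illustrative guesses, not Lacker’s exact transcript:

```python
# Sketch of common-sense probing in a Q&A prompt format.
# The questions here are illustrative, not Lacker's actual tests.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

prompt = (
    "Q: What is the capital of France?\n"
    "A: Paris.\n"
    "Q: How many eyes does a giraffe have?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=20,
    temperature=0.0,  # deterministic answers suit a quiz format
    stop="\nQ:",      # stop before the model invents its own next question
)

print(response.choices[0].text.strip())
```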
Sharif Shameem at debuild.co has built some really exciting demos using GPT-3. He explains that using GPT-3 is “far more exciting than writing JSX code”: just describe what your app should do in plain English, then start using it within seconds. An example of Shameem’s demos can be found here.
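A demo in that spirit can be sketched the same way: pair a plain-English description with the code it should produce, then let the model complete the pattern. Everything below is an illustrative guess at the technique, not debuild.co’s actual implementation:

```python
# Sketch of a "describe the app, get the code" demo in the style of
# Shameem's work. The priming pair and description are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

prompt = (
    "Description: a button that says 'Subscribe'\n"
    "JSX: <button>Subscribe</button>\n"
    "Description: a heading that welcomes the user, "
    "with a red logout button below it\n"
    "JSX:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.2,
    stop="\nDescription:",  # stop before the model starts a new example
)

print(response.choices[0].text.strip())  # the generated JSX markup
```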
Let’s be objective
GPT-3 is not without its flaws. It has no real understanding of the words it churns out and lacks any semantic representation of the real world. In other words, GPT-3 is devoid of genuine common sense and can therefore be fooled into generating text that is incorrect, or even racist, sexist and heavily biased. Like most neural network models, GPT-3 is also a black box: it is impossible to see why it makes the decisions it does. It has the same architecture as GPT-2 – the only real difference is the vast scale – and it suffers from the same shortcomings as its predecessor in real-world sensibility and coherence.
Business advisor and author Bernard Marr describes the tool as being “better at creating content that has a language structure… than anything that has come before” – making it, perhaps, the most powerful language tool yet to approximate human intellect. However, the tool requires large amounts of computing power to function, and still struggles to undertake more complex tasks.
But there is still hope – with some fine-tuning and falling technology prices, it could yet outperform the competition once it is in the hands of the public.