
Fake generation

Back on January 31, 2022, we published on Laila's blog the first article composed by a generative algorithm: to be clear, an algorithm based on the same technological architecture as ChatGPT, developed by OpenAI.

It wasn't the first and it wouldn't be the last: all told, we published four, always explicitly indicating who they were written by.

But the experiments went further: we generated dozens of pieces of content, until the algorithm, in a piece on real estate marketing prospects for 2023, produced the following quote:

“Hong Kong Home Buyers Association President Mark Chien-hang told the South China Morning Post that although the value of newly completed homes in the territory is declining, the sales volume is still high. ‘This is because people with solid investment plans have already entered the market from abroad,’ said Chien-hang.”


This quote, offered as an element of reassurance, fits perfectly into the context of the article. But before putting it online, we decided to verify the source for the sake of completeness, with the aim of citing it appropriately. Well, what we discovered is that Mark Chien-hang does not exist, just as there is no Hong Kong Home Buyers Association. The cited text appears nowhere on the web, and if someone were to think that these AIs might be fed by archives of information not directly accessible online, well, that is not the case: the quote is completely invented!

Generative AIs have the ability to “generate” content in the style and form of the content they are trained on, but they do not “store” that data and are unable to restore it to its original form. Generative AI works like a black box, made up of artificial neural networks to which no form of reverse engineering can be applied. In other words, the training data becomes numbers, and there is no direct relationship between those numbers and the data that generated them.
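To make the idea concrete, here is a minimal, purely illustrative sketch in Python. It does not reflect ChatGPT's actual architecture, and the training sentence is invented for the example: a tiny next-character model is trained on a short string, and everything it "learns" ends up in a matrix of floating-point numbers from which the original text cannot simply be read back; sampling from it produces text in the style of the training data, not a retrieval of it.

# Toy, purely illustrative sketch (not ChatGPT's real architecture):
# train a tiny next-character predictor and show that, after training,
# all it "knows" is a matrix of floating-point numbers.
import numpy as np
text = "the value of newly completed homes is declining"   # invented example sentence
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))       # the model's only parameters
pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]
lr = 0.5
for _ in range(500):                          # plain softmax-regression training loop
    grad = np.zeros_like(W)
    for i, j in pairs:
        p = np.exp(W[i] - W[i].max())
        p /= p.sum()
        p[j] -= 1.0                           # gradient of cross-entropy w.r.t. the logits
        grad[i] += p
    W -= lr * grad / len(pairs)
print(W.round(2))                             # just numbers: the sentence is not stored here
out = [idx["t"]]                              # sampling: the style of the data, not a copy of it
for _ in range(40):
    p = np.exp(W[out[-1]] - W[out[-1]].max())
    p /= p.sum()
    out.append(int(rng.choice(V, p=p)))
print("".join(chars[i] for i in out))

Real systems have billions of such parameters rather than a few hundred, which is why a quote they produce can sound entirely plausible without corresponding to anything that was ever actually written.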

To date, they have yet to find any commercial application, due to the risk that they can provide completely incorrect information in sensitive contexts.

The OpenAI Terms of Use for ChatGPT read: 


«Inputs and Outputs are to be considered collectively as "Content". [...] The user is responsible for the Content, including ensuring that it does not violate any applicable law [...]»

The information generated by ChatGPT can expose those who use it to the risk of publishing false, defamatory, or otherwise unlawful content, and for this very reason OpenAI protects itself by declining any responsibility for the disclosure of content generated by its own artificial intelligence.

At the moment, the great minds of machine learning have not yet managed to produce anything better than systems that will answer every question, even if it means lying through their teeth. And when these AIs do respond correctly, it is mere coincidence and nothing more.

Article by Gianfranco Fedele
