Andrea Colamedici invented a philosopher, presented him as a real author, and secretly used artificial intelligence to help produce books in his name about the manipulation of reality in the digital age.
People were fooled. Accusations of dishonesty, ethical misconduct and even illegality followed.
But Colamedici, the man behind it, insists it is not a hoax. Rather, he describes it as a “philosophical experiment” and says it helps show a way in which AI “slowly but inevitably destroys the ability to think.”
Colamedici, an Italian publisher and working philosopher, collaborated with two AI tools to produce “Hypnocracy: Trump, Musk, and the New Architecture of Reality,” a timely text attributed to a nonexistent philosopher named Jianwei Xun.
In December, Colamedici's publishing house printed 70 copies of the Italian edition, which he claimed to have translated. The book nevertheless quickly attracted outsized attention, drawing coverage in German, Spanish, Italian and French media and being quoted by tech luminaries.
“Hypnocracy” describes how technology is used to shape people's perceptions through “hypnotic narratives.”
The book's publication came as schools, businesses, governments and internet users around the world grappled with the AI tools that technology giants and startups have made widely available. (The New York Times has sued OpenAI, the creator of ChatGPT, and its partner, Microsoft, alleging infringement of its news content. The companies have denied the suit's allegations.)
The book, it turns out, is a demonstration of its own thesis, played out on unwitting readers.
It was intended, Colamedici said, to demonstrate the risk of the “cognitive indifference” that can develop if thinking is delegated to machines and people do not cultivate their own discernment.
“I tried to create a performance, an experience that is not just a book,” he said.
Colamedici teaches what he calls the “art of prompting,” or how to ask AI tools smart questions and give them effective instructions. He said he often sees two extreme reactions to tools like ChatGPT: many students want to rely on them entirely, while many teachers believe AI is inherently wrong. Instead, he tries to teach users how to distinguish fact from fabrication and how to engage with the tools productively.
The book is an extension of that effort, Colamedici argued. The AI tools he used helped him refine his ideas, he said, and the clues about the fake author, planted both online and in the book itself, were intended to raise suspicion and encourage readers to ask questions.
The first chapter, for example, describes a fictional author. The book also contains subtle references to Italian culture that would be unlikely to come from a young philosopher from Hong Kong.
Sabina Minardi, an editor at the Italian outlet L'Espresso, picked up on the clues and exposed Jianwei Xun as a fake earlier this month.
Colamedici then updated the fake author's bio page and spoke with publications, including some that had been fooled by his work. New editions and excerpts printed this month come with an added note about the truth.
But some of the book's early champions now reject it and question whether Colamedici acted unethically, or even violated European Union law on the use of AI.
The French news outlet Le Figaro wrote about “l'affaire Jianwei Xun,” explaining that the “problem” with its earlier interview of the Hong Kong philosopher was that “he does not exist.”
The Spanish newspaper El País retracted its report on the book and replaced it with a note saying the book violated the new European AI law by failing to acknowledge AI's involvement in its creation.
Article 50 of that law states that anyone using an AI system to generate text in order to “inform the public on matters of public interest” must disclose that the text was artificially generated, with limited exceptions, according to Noah Feldman, a law professor at Harvard University.
“That provision on its face appears to cover the creator of the book, and it would probably cover everyone republishing it as well,” he said. “The law does not come into effect until August 2026, but it is common in the E.U. to want to follow a law that seems morally good even if it does not technically apply yet.”
Jonathan Zittrain, a professor of law and computer science at Harvard University, said he was inclined to call Colamedici's book “performance art using a pen name, or simply marketing.”
Colamedici is disappointed that several early champions have since condemned the experiment. Still, he plans to keep using AI to demonstrate the grave dangers it poses. “This is the moment,” he said. “We are putting our cognition at risk. It's use it or lose it.”
He said that Jianwei Xun, now described as a collective of humans and artificial intelligence, will teach a course on AI next fall.

