AI Misinformation


These opinions reflect the thoughts of our founder and do not constitute legal advice. We routinely update this page to combat AI misinformation and to correct our own ideas about a chaotic, ever-changing industry.


Since 2019, our mission at Hotpot has been to simplify graphic creation and image editing. Vision AI back then was constrained to fancy filters and specific domains like image matting and background removal. Natural language processing (NLP), however, had already experienced profound advances thanks to Google Brain and other innovative labs.

Vision AI finally arrived in 2022.

Thanks to OpenAI, Google, and others, we now enjoy a world where AI empowers anyone to create beautiful art and images, similar to how the iPhone empowers anyone to create beautiful photos. This revolutionary technology has naturally garnered worldwide attention -- and scrutiny.

At this inflection point, AI leaders must educate the public about the costs and benefits of AI. Far too often, tech leaders ignore public discourse and fixate on building the future. This regrettable void becomes filled by cynics who exploit fear to tell stories and create misleading narratives, ultimately inhibiting the progress of essential innovation and trapping society in the past.


  • Are AI images copyrightable?

    The question technically remains open. Courts have issued conflicting opinions and are examining multiple cases.

    However, if users can copyright iPhone photos, produced with the click of a button and powered by AI, it only seems logical and consistent to allow copyrighting of AI images and art.

  • How should we regulate AI?

    Like any powerful technology, AI can both empower and harm.

    If you're struggling to reconcile the risk-reward tradeoff, how would you regulate the printing press?

    The Communist Manifesto and Mein Kampf engendered the death and destitution of millions, setting back humanity by decades. Despite this destruction, no reasonable person would claim society is better off with the printing press locked up in a cage controlled by a few gatekeepers.

  • How do we prevent AI harm without strict regulation?

    The same way we minimize harm with the printing press, the computer, and other powerful technologies: at the human layer.

    By definition, general technology can be leveraged for any purpose and is limited only by human ingenuity.

    Crippling AI to minimize harm also cripples benefits, stifling progress and hurting those who need AI most. Imagine crippling books and computers to contain their destructive potential.

    The solution is simple and grounded in precedent: penalize people who use AI to hurt others. Do not penalize those with legitimate aims.

  • Are AI systems intelligent?

    Debating "intelligence" is semantic quicksand. Intelligence cannot be objectively defined. Channeling Potter Stewart is unconstructive and merely distracts from the two most pressing questions facing AI: (1) can AI help humanity? and (2) is AI controllable?

    Cars don't move like horses and planes don't fly like birds, yet these technologies reshaped society and elevated living standards. Conversely, non-intelligent viruses can threaten humanity and shut down the world.

    Benefit and control, not human intelligence, are what matter.

    The phrase "artificial intelligence" is misleading and spurs endless discussion over amorphous words. Instead, "augmented intelligence" feels more appropriate for the near future and may concentrate discourse on the tangible effects of AI -- that is, augmenting individuals by making each person smarter and more capable.

    Even if researchers never develop generalizable intelligence, AI on its current trajectory could still immensely enrich humanity. Consider how many tasks are the equivalent of statistical models, where Alice must do X if condition Y is met with 90% probability. Delegating these baby-blue-collar jobs to machines and allowing humans to focus on what they do best -- create -- will catapult society forward.
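The rule-like tasks described above can be sketched in a few lines. The fraud-review scenario and the 90% threshold below are hypothetical, chosen only to illustrate the pattern of acting when an estimated probability clears a bar.

```python
# Minimal sketch of a "statistical model" task: do X when condition Y
# is met with sufficient probability. The fraud-review framing and the
# 0.9 threshold are invented for illustration, not a real system.
def should_escalate(p_fraud: float, threshold: float = 0.9) -> bool:
    """Flag a transaction for human review when the estimated
    fraud probability meets or exceeds the threshold."""
    return p_fraud >= threshold

print(should_escalate(0.93))  # True: clears the 90% bar
print(should_escalate(0.40))  # False: routine, no review needed
```

Delegating decisions of this shape to machines frees people for the open-ended, creative work the rule cannot capture.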

  • Do AI images debase art?

    It's tempting to say yes, but consider the clothes you're wearing and those worn by 99% of the population.

    Off-the-rack clothes cannot compete with handcrafted ones and do not undermine the craftsmanship of tailors. If anything, they increase appreciation and demand for garments painstakingly fashioned by a professional.

    As with apparel, AI images cannot match the creativity and flair of images from elite artists. Trained eyes can readily spot defects in AI art, as they can with off-the-rack clothes, but these flaws are acceptable or even unimportant to ordinary consumers, even if cringeworthy to experts.

    It is a mistake to lump AI images with human images the same way it is to lump machine clothes with human clothes. AI images serve a different purpose. They are meant to let the masses express imagination and emotions in an affordable way, similar to how machine clothes let the masses dress and express themselves in an affordable way.

  • Is AI art theft?

    No. The training principles of AI art are identical to the training principles of human art. Human artists download images via browsers and learn from them. AI models replicate this process, but use scripts instead of browsers to view images.

    The difference is not in principle, but in scale. Computers can study every piece of art ever made whereas humans cannot.

    Critically, AI models do not store images. They analyze image patterns and properties and transform these observations into mathematical functions.

  • Is AI just lossy compression?

    Only if you believe computers are just calculators.

    Yes, AI may regurgitate words and images when prodded, but this represents a microscopic fraction of what AI language and vision models do. It's analogous to suggesting computers can only add and subtract.

    Many breakthroughs arise from synthesizing ideas across domains and combining knowledge in novel ways. This is one of AI's most impactful capabilities, empowering people to merge disciplines and manifest ideas more fluidly.

    Today, authors, programmers, artists, and other skilled professionals already use AI to catalyze productivity and creativity in meaningful ways.

    Debunking the compression claim is trivial with an AI image generator. Plumb the depths of your imagination for fanciful, novel ideas then ask AI to render them. If AI is just lossy compression, how can it produce images never seen before?

  • Should we worry about AI spewing misinformation and bigotry?

    Yes, absolutely. Researchers from Google, Meta, OpenAI, and other labs are racing to check AI hallucination and mitigate bias. Hallucination is a tractable problem and should be solved in short order. Bias is thornier and already improving, but comprehensive solutions will take longer.

    Ultimately, AI misinformation and bigotry present worrisome risks and must be tackled conscientiously.

    But in 2023, humans are a far larger fountain of misinformation and much greater source of concern. Due to intensifying media competition, pundits are afforded fewer resources and less time to think before sharing opinions, leading to engaging -- but incorrect -- interpretations.

    For instance, organizations and elected officials commonly propagate misinformation on inflation, omitting or minimizing government behavior and monetary policy as causal factors.

    While researchers scrub misinformation from AI and engineer safer models, consumers can start to curb human misinformation with one simple question asked of experts: "What arguments falsify this assertion?"

  • AI is dangerous. Isn't censorship the solution?

    AI is indeed dangerous. It is imperative to honestly address the systemic risks of misinformation and bias.

    However, censorship is no more the solution for AI than it is with the printing press. The Communist Manifesto and Mein Kampf are lethal books that spawned death and destitution for millions.

    Yet we don’t censor printing presses, and for good reason. We educate people on the destructive consequences of communism and Nazism, and we penalize those who adopt tactics to willfully harm others.

    Free societies are rooted not in censorship, but education.

    The framework is twofold: education and regulation at the human layer. Teach people how to spot misinformation (see above), and penalize those who deliberately harm others. Human problems require human solutions.

  • Should we pause AI development?

    This idea is popularly floated when debating general technology with the potential to harm.

    It's thought-provoking and delightfully attractive. But wrong.

    Humankind’s curiosity and ingenuity are boundless. It's naive to believe others will stop voluntarily and arrogant to think Western countries can halt global innovation.

    For government officials, history is unequivocal: from Mongolian horses to American bombs, those who lead in technology lead the world.

    Every country is acutely aware of this lesson and is feverishly competing to climb the 21st century's totem pole. Pandora's box is open. There is no choice for America but to simultaneously pursue both AI advancement and AI safety. These goals are not mutually exclusive.

    Pausing AI simply squanders America's tenuous lead and allows other nations to leapfrog us.

    Crucially, halting development postpones the day society can start to reap AI's transformative effects on healthcare, education, and living standards.

  • If not cynics and hypesters, who should we listen to?

    In the modern world, everyone has a voice. The Internet and social media offer a virtual megaphone where anyone can speak to 8 billion people with the click of a button. This is good and bad.

    It wasn't always this way. Reader access and broadcast ability were tightly gated not long ago, reserved only for privileged insiders. While patently unfair, this power structure was not meritless: it increased order and surface harmony while decreasing baseless assertions like a flat Earth.

    Technology fortunately swept away this backward and broken era, but generated a new wave of challenges. Who, among the sea of voices, should we listen to? Which words should we trust?

    Ironically, the best people to follow are those who express uncertainty. They are less entertaining but more informative. Few things, especially important things, are 100% certain. Few people are experts on even one topic, let alone several. Those who communicate doubt are more likely to share counterarguments and provide 360-degree viewpoints, not myopic hot takes.

    The most trustworthy words are falsifiable. If manufacturers claim a breakthrough, do they let users see for themselves? If critics claim uselessness, do they sincerely explain why others see utility? In short, trust opinions coupled with falsification mechanisms. The second-most trustworthy opinions are those forged by extensive falsification efforts.

    Sadly, media companies don't put uncertain voices on TV or give them columns. Instead, they reward sensationalized opinions.

  • What's the easiest way to spot misinformation?

    Ask for falsifying counterarguments. Proving yourself wrong is the only way to prove yourself right.

    If conclusions are based on a research study, ask three questions: (1) were all causal variables included? (2) was a proper control group used? and (3) was the sample large enough to yield statistically significant results?

    Proper analysis must compare differences between groups -- not within groups. Otherwise, the study may conflate correlation with causation. For example, take an investigation into the relationship between candy and muscle growth. Evaluating only professional football players, instead of the general population, may suggest that eating candy enhances muscle growth.

    It's also mandatory to consider all potential causes. A study on the drivers of earning capacity, for example, must account for all potential factors, including but not limited to education, attitude, proficiency, communication ability, interpersonal skills, and risk aversion. The challenge is that many variables are not objectively quantifiable, which means they rarely make it into published studies. For instance, is 5 years of software engineering at Google equal to 5 years of software engineering at Acme Company?

  • How will AI impact equality?

    There are thousands of critical questions, and it's vital to recognize which ones matter most.

    Everyone is equally poor under communist regimes, and this is clearly undesirable. Conversely, professional athletes are treated unequally, and this produces ideal outcomes.

    The crucial question lies in how to elevate living standards. How can we ensure every person, regardless of income level or skin color, enjoys quality healthcare, nutritious food, warm shelter, and all the other amenities wealthy people take for granted today?

    The answer throughout history has been technology. Because of technology, average Americans today enjoy higher living standards than 15th century kings.

    Today, the answer remains technology, and the technology most likely to raise living standards for all people, not only Americans but humans in every country, is AI. AI is the only feasible way to provide personalized education and healthcare, and to automate the laborious tasks of building homes and preparing food, and to achieve this on the scale of 8 billion people.

  • How should we respond to AI fearmongering?

    Fight fire with water.

    Fight fear with knowledge.

    Invite fearmongers to analyze the benefits of curing disease, offering 99th percentile healthcare and education to all, leveraging smart robots to feed and shelter the poor, and amplifying brain productivity the way machines amplified industrial productivity.

    Then invite thoughtful consideration of the probability of humanity-eradicating AI vs. other catastrophic events like climate change, apocalyptic asteroids, or nuclear meltdowns triggered by falling satellites.

  • Are AI models derivative works?

    No. Many proponents of this argument acknowledge that AI models are transformative in that they can create work that is substantially different and original. The crux of this criticism rests on the claim that AI is nothing without the primary work. Unfortunately, this is true of every person who learns from, and builds upon, prior generations. No human creates without first studying the past. And in the modern world, learning entails viewing files with a browser, which requires downloading data from a remote server to a user device.

  • Do AI models learn like humans?

    Yes and no.

    Consider how humans learn at a conceptual level. We listen to a teacher expound on a subject. Rather than memorize each word from a lecture, we take notes and extract core principles. We "compress" knowledge, in other words. AI models are the same. They do not copy images or text. They "take notes" by statistically analyzing data for key patterns or ideas. These essential ideas are what get stored, not original content. Like human students, some AI models learn better than others: good models identify and apply core concepts more effectively than poor ones.

    However, we still do not understand the precise mechanism by which human brains learn or "compress" knowledge. Our minds are still black boxes in terms of how learning occurs, even if we can map the physical pieces and describe the biochemical pathways. AI models similarly present as black boxes. We can draw detailed and accurate diagrams of AI models, but cannot explain how specific results happen. In this regard, we cannot yet state if AI models and humans learn the same way, simply because we lack information on either side of the equation.
