AI Thoughts

Disclosure

These opinions reflect the thoughts of our founder and do not constitute legal advice. We routinely update this page to combat AI misinformation and to correct our own ideas about a chaotic, ever-changing industry.

Motivation

Since 2019, our mission at Hotpot has been to simplify graphic creation. Vision AI back then was constrained to fancy filters and specific domains like image matting and background removal. Natural language processing (NLP), however, had already experienced profound advances thanks to Google Brain and other innovative labs.

Vision AI finally arrived in 2022.

Thanks to OpenAI, Google Brain, Stability.ai, and others, we now enjoy a world where AI empowers anyone to create beautiful art and images, not unlike how the iPhone empowers anyone to create beautiful photos. This revolutionary technology has naturally garnered worldwide attention -- and scrutiny.

At this inflection point, AI leaders must educate the public about the costs and benefits of AI. Far too often, tech leaders ignore public discourse and focus solely on building the future. Cynics fill this regrettable void, exploiting fear to spin misleading narratives that inhibit essential innovation and trap society in the past.

Questions

  • Are AI images copyrightable?

    The question technically remains open. Courts have issued conflicting opinions and are examining multiple cases.

    However, if users can copyright iPhone photos, produced with the click of a button and powered by AI, it only seems logical and consistent to allow copyrighting of AI images and art.

  • How should we regulate AI?

    Like any powerful technology, AI can both empower and harm.

    If you're struggling to reconcile the risk-reward tradeoff, ask yourself: how would you regulate the printing press?

    The Communist Manifesto and Mein Kampf engendered the death and destitution of millions, setting back humanity by decades.

  • How do we prevent AI harm without strict regulation?

    The same way we minimize harm with the printing press, the computer, and other powerful technologies: at the human layer.

    By definition, general technology can be leveraged for any purpose and is limited only by human ingenuity.

    Crippling AI to minimize harm also cripples benefits, stifling progress and hurting those who need AI most. Imagine crippling books and computers to contain their destructive potential.

    The solution is simple and grounded in precedent: penalize people who use AI to hurt others. Do not penalize those with legitimate aims.

  • Are AI systems intelligent?

    Fixating on "intelligence" is semantic quicksand. Intelligence cannot be objectively defined, at least not so far. Channeling Potter Stewart is unconstructive and merely distracts from the two most pressing questions facing AI: (1) can AI help humanity? and (2) is AI controllable?

    Cars don't move like horses and planes don't fly like birds, yet these technologies reshaped society and elevated living standards. Conversely, non-intelligent viruses can threaten humanity and shut down the world.

    Benefit and control, not human intelligence, are what matter.

    The phrase "artificial intelligence" is misleading and spurs endless debates over amorphous words. Instead, "augmented intelligence" feels more appropriate for the near future and may concentrate discourse on the tangible effects of AI -- that is, augmenting individuals by making each person smarter and more capable.

    Even if researchers never develop generalizable intelligence, AI on its current trajectory could still immensely enrich humanity. Consider how many tasks are the equivalent of statistical models, where Alice must do X if condition Y is met with 90% probability. Delegating these baby-blue-collar jobs to machines and allowing humans to focus on what they do best -- create -- will catapult society forward.
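
    For illustration, a rule of this shape is trivially mechanical. A minimal Python sketch, where the task, the names, and the 90% threshold are all hypothetical:

        # Hypothetical sketch: a "baby-blue-collar" task reduced to a decision rule.
        # Alice must flag a transaction (do X) when the model scores it as
        # fraudulent (condition Y) with at least 90% probability.
        def should_flag(fraud_probability: float, threshold: float = 0.90) -> bool:
            """Return True when the task's condition is met."""
            return fraud_probability >= threshold

        print(should_flag(0.93))  # True: the machine can act on its own
        print(should_flag(0.42))  # False: no action needed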

  • Is AI art theft?

    No. The training principles of AI art are identical to the training principles of human art. Human artists download images via browsers and learn from them. AI models replicate this process, but use scripts instead of browsers to view images.

    The difference is not in principle, but in scale. Computers can study every piece of art ever made whereas humans cannot.

    Critically, AI models do not store images. They analyze image patterns and properties and transform these observations into mathematical functions.
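
    A deliberately simplified sketch of this idea, using ordinary curve fitting rather than how image models actually work internally: training distills many observations into a few parameters, and the observations themselves are discarded.

        # Simplified illustration: fitting keeps parameters, not observations.
        import numpy as np

        # 1,000 noisy observations of an underlying pattern (y ~ 2x + 1)
        x = np.linspace(0, 10, 1000)
        y = 2 * x + 1 + np.random.normal(0, 0.5, size=x.shape)

        # The fit retains only two numbers (slope, intercept), not 1,000 points.
        slope, intercept = np.polyfit(x, y, deg=1)

        # The "model" is now a mathematical function of the learned parameters.
        predict = lambda x_new: slope * x_new + intercept
        print(predict(5.0))  # roughly 11.0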

  • Is AI just lossy compression?

    Only if you believe computers are just calculators.

    Yes, AI may regurgitate words and images when prodded, but this represents a microscopic fraction of what AI language and vision models do. It's analogous to suggesting computers can only add and subtract.

    Many breakthroughs arise from synthesizing ideas across domains and combining knowledge in novel ways. This is one of AI's most impactful capabilities, empowering people to merge disciplines and manifest ideas more fluidly.

    Today, authors, programmers, artists, and other skilled professionals already use AI to catalyze productivity and creativity in meaningful ways.

    Debunking the compression claim is trivial with an AI image generator. Plumb the depths of your imagination for fanciful, novel ideas, then ask AI to render them. If AI were just lossy compression, how could it produce images never seen before?

  • Should we worry about AI spewing misinformation and bigotry?

    Yes, absolutely. Researchers from Google, Meta, OpenAI, and other labs are racing to check AI hallucination and mitigate bias. Hallucination is a tractable problem and should be solved in short order. Bias is thornier and already improving, but comprehensive solutions will take longer.

    Ultimately, AI misinformation and bigotry present worrisome risks and must be tackled thoroughly and rapidly.

    But in 2023, humans are a far larger fountain of misinformation and a much greater source of concern. Due to intensifying media competition, pundits have fewer resources and less time to think before sharing opinions, leading to engaging -- but incorrect -- interpretations.

    For instance, organizations and elected officials commonly propagate misinformation on inflation, omitting government behavior and monetary policy as a potential cause.

    While researchers scrub misinformation from AI and engineer safer models, consumers can start to curb human misinformation with one simple question asked of experts: "What arguments falsify this assertion?"

  • AI is dangerous. Isn't censorship the solution?

    AI is indeed dangerous. It is imperative to honestly address the systemic risks of misinformation and bias.

    However, censorship is no more the solution for AI than it is with the printing press. The Communist Manifesto and Mein Kampf are lethal books that spawned death and destitution for millions.

    Yet we don't censor printing presses, and for good reason. We educate people on the destructive consequences of communism and Nazism, and we penalize those who willfully harm others.

    Free societies are rooted not in censorship, but education.

    The framework is twofold: education and regulation at the human layer. Teach people how to spot misinformation (see above), and penalize those who deliberately harm others. Human problems require human solutions.

  • Since AI is toxic, we should halt all development. Right?

    This idea is often floated when debating general technologies with the potential to harm.

    It's thought-provoking and delightfully attractive. But wrong.

    Humankind's curiosity and ingenuity are boundless. It is arrogant to assume insights are confined to one individual or group, and egotistical to assume no one else can build the same technology. Halting development in one lab or country merely cedes the frontier to those who continue.

  • If not cynics and hypesters, who should we listen to?

    In the modern world, everyone has a voice. The Internet and social media offer a virtual megaphone where anyone can speak to 8 billion people with the click of a button.

    It wasn't always this way. Reader access and broadcast ability were tightly gated not long ago, reserved for privileged insiders. While patently unfair, this power structure was not meritless: it increased order and surface harmony while suppressing baseless assertions like the flat Earth theory and the moon landing hoax.

    Technology fortunately swept away this backward and broken era, but generated a new wave of challenges. Who, among the sea of voices, should we listen to? What words can we trust?

    Ironically, the best people to follow are those who are often uncertain. They are less entertaining but more informative. Few things are 100% certain. Few people are experts on one topic, let alone multiple. Those who express doubt are more likely to share counterarguments and provide 360-degree viewpoints, not myopic hot takes.

    The most trustworthy words are falsifiable. If makers claim usefulness, do they let people see for themselves? If critics claim uselessness, do they explain why others see utility where they don't? In short, trust opinions coupled with falsification mechanisms or conscientious efforts to prove themselves wrong.

    Sadly, uncertain opinions rarely make it onto TV or into columns. Opinions must be lucid and distinctive, even when the data and evidence suggest not forming one yet.

  • What's the easiest way to spot misinformation?

    Ask for falsifying counterarguments. Proving yourself wrong is the only way to prove yourself right.

    If conclusions stem from a research study, ask three questions: (1) were the results statistically significant? (2) were all causal variables controlled? and (3) were effects compared between a control group and a test group?

    An analysis must compare differences between groups -- not within groups. Otherwise, the study may conflate correlation with causation. For example, if you explore the impact of candy on muscle growth but evaluate only professional football players, you may mistakenly conclude that eating candy enhances muscle growth.

    Accounting for all potential causes is also required. A study on the drivers of earning capacity, for example, must control for every plausible factor, including but not limited to education, attitude, proficiency, communication ability, interpersonal skills, and risk aversion. The challenge is that many variables in the real world are not objectively quantifiable or measurable. For instance, is 5 years of software engineering at Google equal to 5 years of software engineering at Acme Company?
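
    As a concrete illustration with made-up numbers, a between-group comparison in Python might look like the sketch below. It is a toy example of the principle, not a full methodology:

        # Hypothetical data: does candy affect muscle growth?
        # Compare a control group against a test group, not one group alone.
        from scipy import stats

        control_gain = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1, 1.0]  # kg gained, no candy
        candy_gain = [1.2, 1.0, 1.1, 0.9, 1.3, 1.0, 1.2, 0.9]    # kg gained, with candy

        # A two-sample t-test asks whether the between-group difference
        # is larger than chance alone would explain.
        t_stat, p_value = stats.ttest_ind(candy_gain, control_gain)
        print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # a large p suggests no effect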

  • How will AI impact equality?

    There are thousands of critical questions, and it's vital to recognize which ones matter most.

    Everyone is equally poor under communist regimes, and this is clearly undesirable. Conversely, professional athletes are paid unequally, and few consider the outcome unjust.

    The crucial question is how to elevate living standards: how we can ensure every person, regardless of income level or skin color, enjoys quality healthcare, nutritious food, warm shelter, and all the other amenities wealthy people take for granted today.

    The answer throughout history has been technology. Because of technology, average Americans today enjoy higher living standards than kings from the 18th century.

    Today, the answer remains technology, and the technology most likely to raise living standards for all people, not only Americans but humans in every country, is AI. AI is the only feasible way to provide personalized education and healthcare, to automate the laborious work of building homes and preparing food, and to do so at the scale of 8 billion humans.
