Disclosure
These opinions do not constitute legal advice. We routinely update this page to combat AI misinformation and document thoughts about a chaotic, evolving industry.
License
This FAQ can be copied and redistributed for any purpose, provided you link to Hotpot.ai and provide attribution.
Technical Questions
-
Do AI models store images and text?
No.
AI models do not store images or text. It is not possible to open an AI model and find snippets of text and images. Rather, AI models store millions to billions of very small numbers called weights. These weights represent novel statistical interpretations, reflecting the underlying patterns and relationships contained in the training data.
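For the technically curious, here is a minimal sketch (assuming PyTorch is installed) that makes the point concrete: opening a model exposes only arrays of numbers, never snippets of training data.

```python
import torch.nn as nn

# A tiny model: 4 inputs, 2 outputs. Real models are the same idea at vastly larger scale.
model = nn.Linear(4, 2)

# Printing the parameters reveals only lists of small floating-point numbers
# (weights and biases) -- no images, no text.
for name, param in model.named_parameters():
    print(name, param.detach().flatten().tolist())
```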
-
Can AI reproduce images and text?
Yes.
AI models can reproduce images and text, not unlike how Photoshop can reproduce the Mona Lisa.
AI models may generate images and text closely matching their training data. The resemblance between outputs and existing content depends on the specific model and the skill of human operators.
The fundamental difference between AI models and image editors lies in work distribution and collaboration. With Photoshop, generation is largely performed by humans. With AI, generation is largely outsourced to software while humans focus on directing.
While the process differs, both AI models and image editors like Photoshop can reproduce images.
-
Do humans and machines both download images when browsing the web?
Yes.
Humans and machines both use computer programs to download images when browsing the web. For humans, this program is the browser, which downloads images from a remote server to your computer or smartphone. For machines, this program is typically a custom script with no visual interface, as sketched below.
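A minimal sketch of such a script, assuming the Python requests library and a hypothetical placeholder URL, shows that it performs the same HTTP download a browser does, just without the visuals:

```python
import requests

# Hypothetical placeholder URL -- any publicly served image works the same way.
url = "https://example.com/image.jpg"

response = requests.get(url, timeout=10)
response.raise_for_status()  # abort on HTTP errors

# Write the downloaded bytes to disk -- the same transfer a browser performs behind its UI.
with open("image.jpg", "wb") as f:
    f.write(response.content)
```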
Notably, the 9th Circuit Court of Appeals ruled in hiQ Labs v. LinkedIn that scraping publicly accessible data does not violate the Computer Fraud and Abuse Act.
-
Is AI equivalent to lossy compression?
No.
This is like claiming computers are equivalent to calculators, suggesting computers can only add and subtract. AI may regurgitate words and images, but this only reflects a subset of AI functionality.
In humans, breakthroughs often arise from synthesizing ideas across domains and combining knowledge in novel ways. This is a core ability of generative AI, birthing new ideas by connecting old ones.
Debunking the lossy-compression claim is straightforward. In an AI image generator, describe a novel scene like the canonical AI prompt, "An astronaut riding a horse on the moon." Or ask ChatGPT to create a rap about current events in the voice of William Shakespeare.
Creating content that did not previously exist proves that AI is more than compression.
-
Do AI models learn like humans?
Unknown.
Although researchers can characterize physical components and biochemical pathways, the inner mechanisms by which human brains learn or "compress" knowledge remain unclear. AI models similarly present as black boxes. We can draw detailed diagrams of AI models, but cannot explain how specific results occur.
Scientifically, this question is open until we gather more data on both sides of the equation.
Conceptually, there are striking parallels. Humans listen to teachers expound on a subject. Rather than memorize every word, we take notes and extract core principles. We "compress" knowledge, in other words. AI models exhibit similar behavior. They do not copy images or text. They "take notes" by analyzing data for representative patterns. These special patterns, not original content, are what get stored as tiny numbers called weights. Like humans, some AI models learn better than others: good AI models excel at identifying and applying core lessons better than poor models. And like humans, some AIs can memorize text and images if prodded to do so.
-
How is AI safety defined?
It's not, at least not objectively or universally.
While many organizations publish conceptual guidelines around AI codes of conduct, it's not common or even practical to document the detailed tradeoffs that reveal true values.
Should AI align with America, California, France, China, Texas, Greenpeace, the United Nations, or some other entity?
In reality, safety is embraced by all until definitions clash. Heated debates on religion, freedom of speech, taxes, and countless other topics pervade the Internet. Conflicting values are a thorny problem that has bedeviled the human species since the dawn of time. Resolution demands deep thought, honest dialogue, and uncomfortable compromises.
If people cannot even agree on speed limits for cars, imagine the difficulty in agreeing on safety limits for AI. This is not to diminish the role of AI alignment. AI alignment is vital, but the problem is much deeper and more philosophical than many experts represent.
-
Is AI sentient?
No.
AI safetyists and AI advocates are both fallible, one prone to hyping fears and the other prone to hyping abilities.
A wave of sentience claims polluted the media in 2022, triggered by the baseless assertions of a Google engineer. These assertions were unfounded to anyone with even a minimal understanding of AI, but unfortunately, many outlets published them without rudimentary fact checking.
The restrictions on pre-2024 models, though enacted with pure intentions, also highlight the degree to which AI safetyists may overdramatize. State-of-the-art models in 2022 and 2023 were heavily locked down over concerns of potential mayhem, but their 2024 open-source equivalents have not only failed to engender societal collapse; they have unlocked billions of dollars in economic value by automating work and stimulating creativity.
As of November 2024, leading AI models -- including ones from Google -- can certainly converse with human-like fluency, but they are still far from exhibiting the kind of independent, self-initiated thought expected of sentient organisms.
-
Are claims of AI extinction supported by scientific evidence?
No.
There is zero evidence of AI leading to human extinction. All claims are speculative.
If someone offers an extinction prediction, ask for their last 100 predictions and 100 predictions for the upcoming year. This exercise will demonstrate the fallibility of their probabilities. Well-researched articles do not quote such probabilities due to their unscientific, wildly speculative nature.
This does not mean ignoring extinction hypotheticals. It is imperative to listen and listen carefully. Listen, but think.
Fear is healthy when grounded in objective analysis.
Society should not limit innovation freedom because of sensationalized fears from fallible experts. The burden of proof lies with those demanding limits on freedom.
If the fear is misinformation, ban websites. If the fear is bioweapons, ban Google. If the fear is crime, ban cars. If the fear is extinction, ban industrialization. If the fear is war, ban the pen and books like Mein Kampf and the Communist Manifesto.
Society would be trapped in the stone age if fear dictated policy.
-
Is it accurate to equate AI with nuclear bombs?
No.
There are two flaws in equating AI with nuclear bombs.
First, bombs are purely destructive, whereas AI is a general-purpose technology like software, capable of both good and bad.
Second, in 10-20 years, anyone will be able to develop 2024 frontier models from home -- unless we pause algorithm and semiconductor progress, too.
-
Is it accurate to claim hyperintelligent software would also gain consciousness?
No.
The first issue is that "intelligence" and "consciousness" are nebulous terms that escape scientific definition and measurement. Without objective definitions and measurements, virtually any claim is possible.
The second issue is that the evidence suggests no correlation between intelligence and consciousness -- unless we reclassify cows, sheep, and many "unintelligent" animals as intelligent.
Philosophical Questions
-
Is AI good or bad for society?
Powerful technologies like the computer and the web are general-purpose, meaning humans can abuse them for bad and leverage them for good.
AI is no different.
Debating AI is essential, but it is shortsighted to stop at risks. Engage AI critics with simple thought exercises. Without AI-powered software and robots, how can society:
· Provide Stanford-level tutors to billions of children?
· Offer personal doctors and individualized healthcare to 8 billion people?
· Slash the cost of human services by 90% while preserving quality?
-
Should we pause AI?
No.
We should not pause AI for the same reason we should not pause cars, airplanes, and other forms of industrialization despite the non-zero risk of industrialization causing human eradication.
Pausing AI may limit harm, but it also limits growth and ultimately hurts jurisdictions that prize safety over progress, much like a driver crawling along at 5 MPH on the freeway while others whiz by.
The curiosity of humankind is boundless. It is naive to believe others will stop exploring and arrogant to think Western countries can halt innovation.
For government officials, history is unequivocal: from Mongolian horses to American computers, those who lead in technology lead the world.
Every country is acutely aware of this lesson and feverishly competing to climb the 21st century's totem pole. Pandora's box is open. There is no choice for America but to simultaneously pursue both AI advancement and AI safety. These goals are not mutually exclusive.
Pausing AI squanders America's tenuous lead and allows other nations to leapfrog us.
Crucially, postponing development postpones the day society can start reaping AI's transformative effects on healthcare, education, and other critical areas.
-
What is the definition of "truth" and "fact"?
The inconvenient truth is that facts are born as opinions. There is no objective algorithm for determining factuality, which is why an alarming amount of misinformation pollutes society. What begins life as an opinion may later gain acceptance as fact, and vice versa. For instance, if two respected organizations report differing inflation rates, which one is fact?
From facts we can build to truth.
Truth, itself, consists of facts and context. Both are essential. Merchants of misinformation skillfully present facts out of context, altering the perception of truth. Most maddeningly, even given a common set of facts and contexts, what happens when values conflict? For example, is it tyrannical or heroic to save the lives of 1 million people by burdening 300 million?
-
AI is dangerous. Is censorship the solution?
AI is indeed dangerous. It is imperative to honestly address systemic risks, including misinformation and bias.
However, censorship is no more the solution for AI than it is for the printing press. The Communist Manifesto and Mein Kampf are lethal books that spawned death and destitution for millions.
Yet we don’t censor printing presses, and for good reason. We educate people on the destructive consequences of communism and Nazism, and we penalize those who adopt tactics to willfully harm others.
Free societies are rooted not in censorship, but education.
The framework is twofold: education and regulation at the human layer. Teach people how to spot misinformation, and penalize those who deliberately harm others. Human problems mandate human solutions.
-
How can society prevent AI harm?
The same way society minimizes harm with the printing press, the computer, and other powerful technologies: at the human layer.
Any powerful technology can be leveraged for good or abused for bad.
Crippling AI to minimize harm also stifles progress, hurting those who would benefit most from AI. Imagine crippling books and computers in a noble, but misguided, campaign to protect others.
The solution is simple and grounded in precedent: regulate AI at the human layer by penalizing malicious users. Do not limit the potential of law-abiding people.
-
Are AI images copyrightable?
The question remains open. Courts have issued conflicting opinions and are examining multiple cases.
However, if users can copyright iPhone photos, produced with the click of a button and powered by AI, it seems consistent with existing law to permit copyrighting of AI images and text.
-
How can we detect invalid research?
Ask three questions: (1) were all potential causal variables analyzed? (2) were positive and negative controls appropriate? and (3) was the sample large enough to yield statistically significant results?
Proper analysis must compare differences between groups -- not within groups. Otherwise, the study may conflate correlation with causation. For example, take an investigation into the relationship between candy and muscle growth. Evaluating only professional football players, instead of the general population, may suggest that eating candy enhances muscle growth.
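As a brief illustration (assuming SciPy and made-up numbers), a proper between-group comparison looks like this:

```python
from scipy import stats

# Hypothetical muscle-growth scores: candy eaters sampled from the general
# population vs. a non-candy control group, rather than football players alone.
candy_group   = [2.1, 1.8, 2.4, 1.9, 2.0, 2.2, 1.7, 2.3]
control_group = [2.0, 1.9, 2.5, 1.8, 2.1, 2.2, 1.6, 2.4]

# Two-sample t-test: compares differences BETWEEN groups, not within one group.
t_stat, p_value = stats.ttest_ind(candy_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value here offers no evidence that candy drives muscle growth.
```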
It's also crucial to consider all potential causes. A study on earning capacity, for example, must account for all potential drivers, including but not limited to education, attitude, proficiency, communication ability, interpersonal skills, and risk aversion. The challenge is that many variables are not objectively quantifiable, so studies often omit them despite their importance. For instance, how does risk aversion influence earning capacity for the average college graduate?
While this simple approach does not guarantee validity, it can flag studies whose conclusions are unreliable because they fail to meet universal scientific standards.
-
Are AI systems intelligent?
Debating "intelligence" is semantic quicksand. Intelligence cannot be objectively defined. Channeling Potter Stewart is unconstructive and merely distracts from the two most pressing questions facing AI: (1) can AI help humanity? and (2) is AI controllable?
Cars don't move like horses and planes don't fly like birds, yet these technologies reshaped society and elevated living standards. Conversely, non-intelligent viruses can threaten humanity and shut down the world.
Benefit and control, not human intelligence, are what matter.
The phrase "artificial intelligence" is misleading and spurs endless discussion over amorphous words. Instead, "augmented intelligence" feels more appropriate for the near future and may concentrate discourse on the tangible effects of AI -- that is, augmenting individuals by making each person smarter and more capable.
Even if researchers never develop generalizable intelligence, AI on its current trajectory could still immensely enrich humanity. Consider how many tasks are the equivalent of statistical models, where Alice must do X if condition Y is met with 90% probability. Delegating these baby-blue-collar jobs to machines and allowing humans to focus on what they do best -- create -- will catapult society forward.
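To make the "do X if condition Y holds with 90% probability" pattern concrete, here is a minimal sketch; the threshold and the invoice scenario are hypothetical:

```python
def should_act(probability_of_y: float, threshold: float = 0.9) -> bool:
    """Trigger action X when condition Y is likely enough."""
    return probability_of_y >= threshold

# e.g. a model scores an invoice as fraudulent with probability 0.93
if should_act(0.93):
    print("Do X: flag the invoice for human review")
```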
-
Which AI voices are trustworthy?
Like the web and every technology boom before it, AI is awash with a flood of cynics and hypesters whose fortunes depend on selling stories over truth.
Everyone has a voice in the modern world. The Internet and social media offer a virtual megaphone where anyone can speak to 8 billion people with the click of a button. This is good and bad.
It wasn't always this way. Reader access and broadcast ability were tightly gated not long ago, reserved only for privileged insiders. While patently unfair, this power structure was not meritless: it increased order and surface harmony while decreasing baseless assertions like flat-Earth claims.
Technology thankfully swept away this backward era, but generated a new wave of challenges. Who, among the sea of voices, should we listen to? Whose words should we trust?
Ironically, the best people to follow are those who express uncertainty and conflicting perspectives, those who can provide 360-degree viewpoints and argue either side with equal fluidity. They are less entertaining but more informative. Few answers, especially important ones, are 100% certain. Few people are experts on even one topic, let alone multiple. Those who communicate doubt are more likely to share counterarguments, not myopic hot takes like the classic 1998 prediction, "By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's."
The most trustworthy opinions are falsifiable. If a company claims speed breakthroughs, can users measure for themselves? If a scientist claims some cure, can patients verify results?
The second-most trustworthy opinions are objective, accompanied by good-faith counterarguments. Do critics list benefits as exhaustively as flaws? Do advocates document risks as thoroughly as advantages?
-
Should we worry about AI spewing misinformation and bigotry?
Yes, absolutely. Researchers from Google, Meta, OpenAI, and other labs are racing to check AI hallucination and mitigate bias. Hallucination is a tractable problem and should be solved in short order. Bias is thornier; it is already improving, but comprehensive solutions will take longer.
Ultimately, AI misinformation and bigotry present worrisome risks and must be tackled conscientiously.
However, if the goal is to plug the misinformation and bigotry fountain, humans are a much larger concern. Due to intensifying media competition, pundits are afforded fewer resources and less time to think before sharing opinions, leading to engaging -- but incorrect -- interpretations. The scope of human misinformation is staggering.
For instance, respected organizations and elected officials in recent times propagated misinformation on inflation, omitting or minimizing government actions as causal factors.
While researchers scrub misinformation from AI and engineer safer models, consumers can start to curb human misinformation with one simple question asked of experts, "What arguments falsify this assertion?"
-
Do AI images debase art?
It's tempting to say yes, but consider the clothes you're wearing and those worn by 99% of the population.
Off-the-rack clothes cannot compete with handcrafted ones and do not undermine the craftsmanship of tailors. If anything, they increase appreciation and demand for garments painstakingly fashioned by a professional.
As with apparel, AI images cannot match the creativity and flair of images from elite artists. Trained eyes can readily spot defects in AI art as they can in off-the-rack clothes, but these flaws are acceptable or even unimportant to normal consumers, even if cringeworthy to experts.
It is a mistake to lump AI images with human images the same way it is to lump machine clothes with human clothes. AI images serve a different purpose. They are meant to let the masses express imagination and emotions in an affordable way, similar to how machine clothes let the masses dress and express themselves in an affordable way.
-
Is AI art theft?
No. The training principles of AI art are identical to the training principles of human art. Human artists download images via browsers and learn from them. AI models replicate this process, but use scripts instead of browsers to view images.
The difference is not in principle, but in scale. Computers can study every piece of art ever made whereas humans cannot.
Critically, AI models do not store images. They analyze image patterns and properties and transform these observations into mathematical functions.
-
How will AI impact equality?
There are thousands of critical questions, and it's vital to recognize which ones matter most.
Everyone is equally poor under communist regimes, so this form of equality is clearly undesirable. Conversely, professional athletes are treated unequally, yet this produces ideal outcomes.
The crucial question lies in how to elevate living standards: how can we ensure every person, regardless of income level or skin color, enjoys quality healthcare, nutritious food, warm shelter, and other amenities wealthy people take for granted today?
The answer throughout history has been technology. Because of technology, average Americans enjoy higher living standards than 15th century kings.
Today, the answer remains technology, and the technology most likely to raise living standards for all people, not only Americans but humans in every country, is AI. AI is the only feasible way to provide personalized education and healthcare, to automate the laborious tasks of building homes and preparing food, and to do so on the scale of 8 billion people.
-
How should we respond to AI fearmongering?
Fight fire with water.
Fight fear with knowledge.
Invite fearmongers to analyze the benefits of curing disease, offering 99th percentile healthcare and education to all, leveraging smart robots to feed and shelter the poor, and amplifying brain productivity the way machines amplified industrial productivity.
Then invite thoughtful consideration of the probability of humanity-eradicating AI vs. other catastrophic events like climate change, apocalyptic asteroids, or nuclear meltdowns triggered by falling satellites.
-
Are AI models derivative works?
No. Many proponents of this argument acknowledge that AI models are transformative in that they can create work that is substantially different and original. The crux of the criticism rests on the claim that AI is nothing without the primary works it learns from. Unfortunately for the argument, this is true of every person who learns from, and builds upon, prior generations. No human creates without first studying the past. And in the modern world, learning entails viewing files with a browser, which requires downloading data from a remote server to a user device.