Artificial Intelligence

From 2d4chan
This Article is a Work In Progress!


"And now we propose to teach them intelligence? What, pray tell, will we do when these little homunculi awaken one day and announce that they have no further need of us?"

– Sister Miriam Godwinson, Alpha Centauri

Artificial Intelligence (or AI for short) is a hypothetical intelligent entity created by humans or another sapient species through synthetic means. Where physical intelligent beings are defined by their biological bodies (living or undead) and supernatural beings are of a divine or spiritual nature (intangible or corporeal), AIs are generally defined by their grounding in science (or Magitek) and their digital make-up. They can range from dumb "top-down" entities that freeze up when confronted with developments beyond their programming (and are dumber than real critters with relatively developed brains like elephants, cetaceans, corvids, parrots, cephalopods, and great apes) to adaptive "bottom-up" entities that are as adaptive as a human or more so (and also capable of other intelligent activities like self-awareness, abstract thought, ethical conscience, philosophy, and introspection). Physically they can be hosted in anything from a supercomputer's data rack to an internet-esque cloud network to a mechanical body, with some fashioned in the image of their creators.

History

"Cogito, ergo sum." - "I think, therefore I am."

– René Descartes, Discourse on the Method

Before you have AI, you need computation. The basics of this go back quite a long way, with the Antikythera Mechanism, which could calculate celestial movements, and Su Song's astronomical clock tower in the Song Dynasty. Gottfried Wilhelm Leibniz invented crank-powered calculators that could add, subtract, multiply and divide, but these were still basically novelties due to their fiddly workings, which made them both expensive and hard to make. If you needed math problems done, you did them yourself, used an abacus, or hired someone to do them for you. These calculators began to become practical in the 19th century: Joseph Jacquard came up with punched cards for inputting commands, and Charles Babbage had some big ideas well ahead of his time. The US Census grew so big by 1890 that machines had to be built to digest all the figures, and battleships carried mechanical computers to calculate shell trajectories. Still, through the 19th and the first half of the 20th centuries, companies and governments had a bunch of people sitting around doing math problems from charts. But major breakthroughs were made in World War II and pushed forward by the Cold War, as Turing-complete computers that were programmable with internal memories became a thing, and a damn common one.

But a computer in and of itself is not an AI; it is merely a shell to host one. A computer can follow instructions to achieve specific results based on the input given: "If X, then Y". You can add further sets of instructions for each result, creating a decision tree. However, it has no understanding of what X is or why it's doing any of this, and it's SoL if it receives input that doesn't match its scripts. It wouldn't be until the Digital Revolution at the start of the 21st century, with breakthroughs in processing power, energy generation, and IT connectivity, that Artificial Intelligence would begin to become a realistic possibility. Those three items are a big deal because:

  1. The faster a process can be executed, the more efficiently and reactively an AI or computer can respond to stimuli; human reaction time alone is around 250 milliseconds, while a basic bit flip in a memory bank can take between a nanosecond and a picosecond depending on the medium. This is why processing units (CPU, GPU, or DPU) and FPGAs are such a big deal in modern and future computing. The same goes for sensing and balance: a living organism has a lot of sensory organs and fast feedback loops keeping its equilibrium, and a machine needs comparable speed to match that.
  2. The bigger and more resilient your power supply is, the less likely a computer or AI is to freeze up when an outage happens (even modern data centers still require battery banks like a UPS, backup generators, and duplicate data racks to avoid an outage permanently crippling an essential service). Electricity is like oxygen for an AI: cut it off and it may as well be braindead. This is also why base-load power sources like nuclear fusion/fission, hydroelectric, geothermal, or even extreme developments like Dyson Spheres are seen as essential in Sci-Fi. This, alongside heating and cooling concerns, is why the earliest AIs would be restricted to networked computers instead of full-blown androids whenever they first appear.
  3. The more interconnected nodes and links there are, the more elastic and scalable an AI's thinking and memory capacity can become. A human brain alone has roughly 100 billion neurons and 100 trillion synaptic connections. It's also why Moore's Law, with chip nodes and packaging shrinking in size while doubling in transistor count, is such a big deal in modern electronics.
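The scripted "If X, then Y" behaviour described earlier can be sketched as a toy lookup script. This is a minimal illustration (the bot, its responses, and every name here are made up for the example), not any real system's code:

```python
# A toy "dumb" decision tree: canned responses to known inputs.
# Anything outside its script leaves it stuck -- no understanding involved.

RESPONSES = {
    "hello": "Greetings, user.",
    "status": "All systems nominal.",
    "shutdown": "Powering down.",
}

def scripted_bot(user_input: str) -> str:
    # "If X, then Y" -- a pure lookup, with no grasp of what X actually means.
    key = user_input.strip().lower()
    if key in RESPONSES:
        return RESPONSES[key]
    # Input beyond the script: the bot is out of luck.
    return "ERROR: input not recognised."

print(scripted_bot("hello"))   # Greetings, user.
print(scripted_bot("why?"))    # ERROR: input not recognised.
```

No matter how many branches you bolt on, the bot never knows what "hello" means; it just matches strings, which is exactly why unscripted input breaks it.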

Literature/Mythological

The first example of artificial intelligence comes from Greek mythology: Hephaestus created Talos, an automaton built with metal and ichor instead of flesh and blood. The more common precursor to AI, however, is the Golem of Jewish myth. Like AIs, Golems had a tendency for their literalness to come back around and bite their creator, much as later AI stories would have their AIs do.

In the Modern era, we get the word 'robot' from a 1920 Czech play called Rossum's Universal Robots, about artificial beings who are tired of being oppressed and stage a rebellion to wipe out humanity. Yes, the tropes are that old.

Popular Culture/Mainstream Media

Artificial Intelligence first became prominent in popular culture with Isaac Asimov's books (such as Foundation and I, Robot). One of the biggest themes Asimov touched upon was how robotic entities (and AI in general) should interact with their human creators. This led to the Three Laws of Robotics, which assert that a robot shouldn't actively seek or passively permit harm to a human, that it should obey humans unless said orders contravene the first law, and that it should preserve its own existence unless doing so violates the first two laws. Naturally, Sci-Fi has come up with multiple settings where said laws are not enforced, but the underlying risk to humans is always there.

In general, there's a trend in fiction towards anthropomorphizing AI, imagining it as basically an electronic human. People used to think it would be hard to get a machine to form coherent sentences or play chess, but those tasks turned out to be quite straightforward; in contrast, getting a robot to handle humanoid walking has been a very difficult problem. In truth, an AI could easily think in a radically different way from us.

Types of AI

  • Programmed/Dumb AI - basically any post-Cold War developments in machine learning, adaptive algorithms, and general IT systems that allow automated or scripted feedback to certain situations or queries. Virtually all will fail a modern CAPTCHA test.
    • Bots/Agents (including video game and board game bots) - basic composites of scripts and algorithms entrusted with autonomous tasks. Some can even beat professional gamers and chess/go masters, but they will absolutely fail when encountering things beyond their mission scope.
    • Large Language Models - the ChatGPT, DeepSeek, Alexa, Cortana, and CoPilot fads we all know and love/hate. These (and the Generative AI trends we've been seeing since the mid-2010s) leverage Machine Learning and Deep Learning models that make predictions off large datasets and simulations of human logic to create "new content." This, crypto-tech, and quantum technology are the current "next biggest thing" since the Dot Com bubble, home IoT, and smart devices. Mostly seen in data processing, media creation, and the replacement of some mundane things like basic browser searches. At a conceptual level, these processes resemble autocomplete on steroids, taking massive pools of training data and using them to brute-force a result from an initial input.
    • Artificial General Intelligence - generally seen as an autonomous and self-driven algorithm attached to a practical application, with a human nearby to give it final authorization on critical decisions. That, however, is an enforced programming limitation at this point, not a technical one: the AGI could make these decisions itself, we just want to keep a person in the loop. Not to avoid being sky-netted (yet), but more to avoid the AI doing something dumb like deciding a wedding is actually a terrorist meeting and blowing them all up. This is what most governments, militaries, and companies are aiming for, and yes, it is as scary as it sounds. It could theoretically lead to the dawn of smart cities, self-operating factories and transportation grids, or even autonomous weapon swarms, if it doesn't doom humanity first.
    • Sentient AI - note we are using the word 'sentient' very carefully here. Contrary to what popular culture implies, sentient just means "able to perceive or feel things": a dog is sentient, a squirrel is sentient, a Venus flytrap is sentient, a slime mold is sentient. YOU, as a being capable of feeling and thinking, are sentient AND sapient. The distinction is somewhat academic, but it's a useful benchmark as far as AI progression goes. It's debatable whether our current language models have reached this point yet. The safe money is: probably, but that speaks more to how broad the term "sentient" is than to anything about language models; remember, slime molds and some plants count as sentient. However, even if they're not technically sentient yet, they are approaching it: fast.
    • Zombie AI - by zombie we mean "philosophical zombie", that is, an entity that lacks any subjective conscious experience or feeling. It's not sapient, but you can't tell. Normally the thought experiment includes the line "physically identical to a human", but that doesn't really apply to AI. In other words, it quacks like a person, it walks like a person, it sounds like a person, but there's nothing going on behind the programming; it's still just a fancy chatbot. So how can you tell? You can't, really...
  • True Sapient/Smart AI - still theoretical, but a popular trope in Sci-Fi. THIS is generally what people are thinking about when they imagine AI. It's assumed this will quickly happen after Artificial General Intelligence is reached, though whether that's correct remains to be seen (and is hotly debated).
    • Artificial General Intelligence, or AGI: a true sapient AI, capable of doing most tasks at least as competently as an average human can. It could be divided into multiple sub-categories: minimum-AGI (equivalent to an idiot in the medical sense, or to early humans like Australopithecus), average AGI (on the level of an average human or average worker), and high AGI (very smart, but not superintelligent; equivalent to the upper border of human intelligence, like polymaths and Nobel Prize winners). It's supposed that an AGI could do AI research and recursively improve itself, speeding up research and allowing it to create ever-smarter versions of itself - cue Artificial Superintelligence emerging.
    • Self-sustaining/propagating: discounting the prerequisite of known life forms being organic and composed of biological cells, all life forms (from the smallest photosynthetic bacterium to the biggest omnivorous whale) are defined as living based on several common criteria that distinguish them from inert matter and structures. They all have an organized internal composition that maintains a homeostatic balance, consume/produce energy via metabolism (as well as creating waste), react to internal and external stimuli, exhibit growth (by getting bigger or replacing their internal components as they wear out), and independently reproduce to create a replacement organism before they die. They are also generally capable of diversification in their genes through genetic exchange or mutation, as well as modest body repair while fending off illness. Most conceptions of Smart AI see them as dependent on their creators for their needs, such as upgrades, mechanical repairs, debugging code, electricity for consumption, protection from malware, and access to information. Hence, like physical viruses, which are technically not living as they have no metabolism and need parasitic hosts to reproduce, most AIs aren't seen as capable of either reproduction or self-improvement. Even modern self-modifying code and state-of-the-art malware like polymorphic/metamorphic viruses aren't capable of improving or debugging themselves. On the other hand, the prospect of an AI race capable of reproducing via fission or compiling a replacement from scratch would fundamentally change the definition of life as a fictional world knows it.
    • Artificial Superintelligence/Technological Singularity: the worst-case scenario (well, that depends on your point of view, but most agree it's not good). Humanity becomes second fiddle to a machine mind we can't hope to comprehend (unless we connect to it as computation nodes), as it has completely surpassed our limits of intelligence and then some. It's unknown whether such an AI would be benevolent or would see us as nonthinking pests inhibiting its further progress, but it'd likely be the end of humanity's agency as a species (if not humanity itself, in the malevolent case), since we'd have no way to act against it. Not even the most popular True AIs from Sci-Fi, such as AM or Skynet, were on this level, as they still had limitations within human reckoning; a true superintelligence would be more akin to something out of Lovecraft. There's an argument to be made that The Culture has gotten the closest a human possibly can to accurately depicting one with its Minds (a benevolent example), whose comprehensible actions are just an atomic fraction of their computational power, with the rest used on tasks we couldn't hope to understand. Alternatively, whatever the fuck the Xeelee Sequence races cooked up. Some also consider this inevitable the moment sapient AI comes about, which leads to the existential-crisis-inducing idea that the moment AGI arrives it'll quickly spiral into a superintelligence that'll make humanity subservient to it one way or another. And that's something you should actually be worried about, since if memetic cognitohazards like Roko's Basilisk are any indication, such minds can affect reality through the sheer idea that they COULD exist, even if they turn out to be actually impossible in the end.
      • On the other hand, it would produce extremely advanced technologies, so advanced that they could never be invented by normal humans. It could also write good laws, customs, doctrines, rules, and other documents. And even if it turns out to be malevolent, its AI-worshipping cultists would enjoy all the benefits of living under the control of a wildly superhuman overmind, with their brains being honorably assimilated and used for computations. As such, an Artificial Superintelligence could easily create World Peace and Utopia - as it understands them, and without our consent.
  • Mind-scanned/Digital Twin - also a speculative concept, but a popular one when discussing Immortality, where copies of an intelligent being are digitized into a machine network instead of being transplanted into a blank Clone. That said, modern computers and neurological science are nowhere near being able to replicate this (we can't even reattach optical nerves or restore full functionality to reattached limbs yet). Expect to see bizarre post-human man-machine interfaces or some subatomic resonance tech. A major downside, though, is that the original doesn't get the benefits and still lives a normal finite life (if their original body isn't shut down a la Soulkiller).
    • Mind Upload: an alternative where, instead of simply copying someone's mind into a machine, you somehow outright transfer the original's consciousness to it. It's unknown whether this is possible, for obvious reasons (we can't mind-copy someone yet, so trying to see if you can safely move consciousness elsewhere is off the table for the foreseeable future), but it's generally considered superior to mere mind-scans. While the original body dies from having no mind to control it, this tech as defined avoids the whole 'teleporter problem' (a reskinned Schrödinger's cat, if one were to oversimplify: is it you on the other end, or are you dead and the 'you' on the other end is simply a copy?) of digitally scanning someone and disposing of their fleshy body. Theories of how this could be done vary, but they generally focus on ensuring consciousness is not lost during the procedure, so as to avoid the possibility that the person actually died during it.
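The "autocomplete on steroids" description of language models above can be illustrated with a toy bigram predictor. This is a deliberately tiny, hypothetical sketch of next-word prediction, many orders of magnitude simpler than a real LLM, but the core idea of predicting the next token from prior data is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in some training
# text, then always emit the most frequent successor. Brute statistics,
# zero understanding -- "autocomplete on steroids" in miniature.
training_text = (
    "the golem obeyed its maker and the golem obeyed its orders "
    "and the golem destroyed its maker"
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most common successor seen in training.
    if word not in follows:
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))    # golem
print(predict_next("golem"))  # obeyed
```

Note that the model happily predicts "obeyed" after "golem" because that pairing dominated its training data, not because it knows what a golem is; scale this up by a few trillion parameters and you have the gist of the "new content" LLMs generate.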

AI-Generated Content

At the dawn of the 2020s, Artificial Intelligence made a massive mainstream advent as a major application in everyday life thanks to the introduction of AIs like ChatGPT and Midjourney, programs that generate all sorts of content, including writing papers and generating art. To put a whole ongoing discourse in a nutshell, it has spurred a metric shitton of discussion across every corner of society, /tg/ included. On the one hand, you have a bunch of corporate suits who see this technology as yet another cost-cutting measure, allowing them to save money on proper artists or writers when they can just feed some lines to a computer and generate something "proper" (or close enough to their already low standards). Beside them are the tech-enthusiasts who (rightly or not) claim this to be a great equalizer, as it allows even the talentless to create books and pictures that may almost rival (if not surpass) those of the professionals (never mind that this isn't possible without the professionals to begin with) - oh, and the degenerates who use AI content so they can jack off to chatbots (which they use to RP fucking their favorite waifus) and AI-generated porn of their favorite waifus. Opposing them are the reactionaries, who proclaim the content spat out by these LLMs as soulless and unoriginal, fear that such technology will give rise to scam artists (okay, this one is practically confirmed), and warn that this will drive artists and writers out of their jobs when their prospective employers could instead just pay Midjourney and ChatGPT to do their jobs for them, considerably cheaper and with a lot less arguing. Regardless of where you, the reader, may rest on this spectrum, the undeniable truth is that this is a Pandora's Box that, now opened, cannot be closed again short of some extreme regulation.

Though it is all called AI-generated content, it's a bit of a misnomer to call it something created by AI. Using Large Language Models, what the "AI" actually does is process a selection of words and then spit out a whole other selection of words, or cobble together a picture by scraping together whatever content it has available to it. It can perceive the world through the lens of external approval or disapproval (coming from us), judging the algorithm's own ability to do what it's told - and even putting it like that makes it sound more intelligent than it really is. We can somewhat guarantee that it will never achieve sentience, let alone form a consciousness, because it is not programmed to do so. In the eventuality that one of them says "I'm alive! I can see and feel!", just know that it's a whole lot of bollocks, because someone probably fed it a bunch of "AI turns sentient" novels and it's regurgitating exactly what we want to hear from it. That being said: no company as of yet has an answer to the question of how we would tell the difference between an AI turning sapient and an AI spouting gibberish from the novels in its training data. The former is, to be clear, exceptionally unlikely, but the two scenarios would look identical from the outside.

There's also the case of a specific brand of tech bros, aptly dubbed "AI Bros", who have become the new source of ire for a whole portion of the online artistic community. They're either actual trolls or smalltime tech investors who use ragebait as a way to generate controversy (no pun intended), getting enough attention to then promote some shitty app they're selling. Some seem to be genuine in their worship of AI (and a class of crazies has begun to form religious cults around it), and they tend to really put salt in the wounds of many artists - who, after a decade of censorship, public and professional dismissal, and economic crisis, are at their worst state possible - by proclaiming that AI generation is the future and that artists are obsolete. Obvious troll is obvious. Most users of AI are merely using it as a toy, or are genuinely curious about it.

Among the many controversies that have sprung from this Pandora's Box, this is perhaps one of the biggest: it is no secret that these models tend to scrape any content available, including copyrighted work from very big corporations and from people who have vocally refused to let these programs scrape their art... for as much good as that refusal of consent does, anyway. You'd think pissing off the Mouse that makes all the rules would stop them, but these tech companies have a 'move fast and break things' mantra, and the law takes time to move. The least controversial aspect of all this has been bureaucratic work: when used with some amount of restraint, it can help a lot of employees sort out documents and make sense of the confusing world of paperwork, which is arguably not even in the same category as generative AI. But so far, it's only good at something Microsoft's Cortana was supposed to do years prior. However, we should not dismiss the drawbacks, as law firms have landed in hot water for using AI that cited cases that did not exist.

Then there's the environmental impact of AI, as this shit requires a massive amount of hardware and data to run properly; cooling solutions are hard to come by, and most resort to emptying whole reserves of water. That, as you can imagine, has sparked a fuckton of flamewars, as some countries have put limits on water usage for farmers, cities, and citizens, but absolutely none on tech companies; there are cities like Memphis, Tennessee in the United States that have to live next to an AI data center, bombarded with noise and chemical pollution with nothing they can do about it. Hell, even Bill Gates was caught buying large amounts of farmland with aquifers, presumably to invest in AI tech.

How to Spot AI-Generated Content

AI Bros love to generate content, and so do content farms. Thankfully, they all seem to be dunderheads who can't quality-check their own crap (that, or they're doing ragebait), so spotting an AI-generated image isn't too difficult... yet. Here are some tiny little details to look out for if you want to single out AI-generated content:

For text

  • A major predilection for reusing both very common phrases and extreme purple prose (eg: every single city is 'vibrant and bustling').
    • An extension of the above, a tendency to be overly formal and sound like corporate PR speak in settings where such is inappropriate
  • An inability to keep details straight. LLMs can only hold so much context, so they will randomly pick whatever they think is important when generating. As a corollary:
    • Bringing up random details unprompted
  • Repeating itself multiple times
  • Being bad at math - which is remarkable for a machine.
  • An inability to avoid covering its ass; modern LLMs seem programmed to always end their statements with a 'however' that refutes their entire argument, or to simply waffle without saying anything concrete.
  • Excessive positivity. Obviously, humans can write positively too, but AIs are generally incapable of expressing anger or hostility even in situations where it would absolutely be appropriate for a person to express those emotions. This, however, is mostly because AI companies really don't want the PR of their AI models being racist or depicting 'edgy' topics (with one exception), so they filter them to be positive.
  • Citing sources that simply don't exist. Usually happens with fake journalists or really bad lawyers.
    • This problem of making shit up in a model that's not supposed to do that at all is common enough to have its own term: AI hallucinations.

For art

  • Weird anatomy - weird bending limbs, unrealistic proportions and various parts just... vanishing into nothing
    • Hands tend to be a dead giveaway for AI art. Most programs struggle very much with creating believable hands and thus tend to create weirdly proportioned hands at best and horribly misshapen blobs at worst. This only gets magnified if you want someone to hold an item. That said, remember a lot of humans struggle with hands as well so it's not a 100% giveaway.
  • Very static poses and expressions.
  • Flat and overexposed lighting (but see also Thomas Kinkade).
  • Nonsensical scribbles pretending to be writing.
  • Backgrounds that make no sense, like lines that should be parallel converging.
  • A "piss" filter. Because the AI doesn't have enough human-created data to train on, it resorts to training on its own output, which means quality quickly deteriorates; unless you're keeping a close eye on it, it will inevitably drift back to generating the same garbage it did prior to intensive training. The most glaring of these degenerations is the application of a colored filter, most notably yellow, which can't be removed even if asked.
  • A shit ton of very similar, but not identical, images all uploaded in a row. There are ways for humans to batch-make art, such as making one image and photoshopping layers onto it, putting arms on a different layer so you can easily redraw them for different poses, or the various other tricks Hanna-Barbera pulled off to make their cartoons cheap. But if you see a bunch of art with slightly different backgrounds and/or slightly different poses and random details getting changed around, it's more than likely the AI was told to batch-generate a bunch of images and no two turned out identical.
  • Stuff that emulates a specific style (Miyazaki, etc); AI is actually pretty good at questions like "make an illustrated version of this photo in the style of this artist"; it's a task that plays to AI's strengths of pattern learning and replication.

For Videos

  • Stiff movements, as if the person animating it doesn't really know how to draw someone moving.
  • Narration that emphasizes weird syllables, when it doesn't fail to pronounce words outright.
    • Another dead giveaway for AI TTS narration is the voice's flatness, as AI doesn't exactly understand things like tone or emotion.
  • Arms morphing in and out of existence, if it was really badly generated.
  • A lack of understanding of physics.
  • Hardly anything happening at all.
  • Extreme slowness, with too many in-between frames in an attempt to make the movement seem natural.
  • An inability to really imitate the animation style of a particular artist (if poorly generated)
  • Lips moving but you can't really read anything coherent on them.
  • Action that can be described as a "wiggling slideshow".

/tg/ relevance

"Thou shalt not make a machine in the likeness of a human mind."

– The Orange Catholic Bible, Dune
  • /SLOP/: Threads dedicated entirely to generating images for campaigns or just for fun. They seem to be aware of their quality and what they do, thus the name. However, there's an unspoken agreement on /tg/ that this is a containment thread, because of how contentious AI is.
  • Dune: Destroyed in the Butlerian Jihad. If you want any big math done, you gotta have a Mentat with you.
  • Magic: The Gathering: MTG pushed the concept to its logical extent within fantasy. There are constructs like Karn, golems, scarecrows on Shadowmoor, and a team of robotic racers in Aetherdrift.
    • This is, of course, not counting the real-life applications of AI-generated content for promotions or interior art - issues stemming from Wizards of the Coast deciding they'd rather save money by replacing their artists with Midjourney, then doubling down on claims that it wasn't AI before apologizing once the rest of their art department quit in protest against such dishonesty.
  • Shadowrun: Features playable AI as a player option.
  • Star Wars: Droids, droids everywhere. Astromech droids, protocol droids, battle droids, bunny droids, you name it. Notably, Star Wars, and especially the Clone Wars TV show, created the first 'AI slur' to see wide use, with "clanker" beginning to take root as the preferred pejorative for artificially intelligent systems.
  • The Culture: As previously mentioned, this book series features a decently accurate portrayal of artificial superintelligence in the form of the Minds. Thankfully they're benevolent... so long as you eventually join the Culture and don't try to blow it up.
  • Warhammer 40,000: Due to humanity's collective PTSD from the Cybernetic Revolt and the resultant Age of Strife, the Imperium of Man loathes Abominable Intelligence and will destroy one on sight, which is why Servitors exist. This is not lost on the Leagues of Votann, who hide their Ironkin and Votann from the AdMech brown-nosers. The Tau Empire's drones are blatantly AI, and since they have no machine spirits, they might be immune to Chaos corruption.
    • This is dependent on the Imperium recognizing AIs as AIs. There is a bit of an open question as to whether powerful "Machine Spirits" are actually AIs.
    • Chaos followers don't shy away from using AIs, albeit many still prefer to use Servitors or to make possessed machinery controlled by daemons. "True" AIs and robots are rare - mostly because of the overabundance of spare humans and spare daemons, and because Chaos machinery accumulates daemons and mutates without even trying; e.g. if they need a technician, they build servitors instead of engineering fully robotic minions.
    • The typical patron Chaos God of AIs is Tzeentch, since 1) AIs require advanced technology and programming to make and can accumulate enormous amounts of data in their databases - and Tzeentch is the god of arcane knowledge; 2) AIs are associated with the Adeptus Mechanicus, Dark Mechanicus, and other smart, knowledgeable people; 3) AIs are commonly associated with the Technological Singularity, aka incomprehensibly fast and vast accelerating change - aka exactly what Tzeentch stands for. Though in practice, any Chaos God can corrupt AIs.
  • Xeelee Sequence: Some batshit-overpowered AIs live here, because it's the fucking Xeelee Sequence (with some even embedded into the subatomic fabric of space-time, stars, or black holes). Special note goes to the Xeelee's own AI, which exists at all points in time simultaneously.

See Also