Archive for the ‘Artificial Intelligence’ Category

AI Transparency

2024-01-09

(Atlantic Tech) – Matteo Wong:

The resources needed to build generative AI have allowed the tech industry to warp what the public expects from the technology. …

But that would also be a very narrow way to build and use generative AI. … People might also be willing to sacrifice performance for a more fair and transparent chatbot — benefiting from open AI will require not just redefining open-source, but reimagining what AI itself can and should look like. …

theatlantic ai-transparency [Image: Deep Dive AI]

The Human Supremacist Position

2023-12-21

(FTB stderr) – Marcus Ranum:

A fairly common statement regarding AI goes something like: “AI cannot be creative; all they do is re-mix existing stuff probabilistically.”

I characterize that as “The Human Supremacist” position, because it’s implying that there is some sort of “true creativity” which only humans are capable of. There are a lot of problems with that position, which I will attempt to explore, herein. After that, I will describe some of my thoughts on how I experience the creative act, and compare it with how AIs implement creativity. …

stderr the-human-supremacist-position [Image: Marcus Ranum: Generative Adversarial Network]

AI Image Generation

2023-10-11

(SMBC) – Zach Weinersmith:

“Good day sir. In an effort to make AI image generation more safe, I need to uncover which images humans consider sexual and which they do not.” …

smbc generation [Image: Zach Weinersmith: SMBC 2023-10-07: Generation]

AI Vincent van Gogh in Paris Exhibition

2023-10-03

(Guardian Art) – Kim Willsher:

The Musée d’Orsay adds AI and VR to the display of the artist’s last works, never previously seen together. …

theguardian ai-vincent-van-gogh [Image: Van Gogh self-portrait at exhibition]

So Much for ‘Learn to Code’

2023-09-27

(Atlantic Tech) – Kelli María Korducki:

In the age of AI, computer science is no longer the safe major. …

theatlantic computer-science ai [Image: CS Graduate]

AI Image Can’t Be Copyrighted

2023-09-26

(Guardian Tech) – Edward Helmore:

The use of AI in art is facing a setback after a ruling that an award-winning image could not be copyrighted because it was not made sufficiently by humans.

The decision, delivered by the US copyright office review board, found that Théâtre d’Opéra Spatial, an AI-generated image that won first place at the 2022 Colorado state fair annual art competition, was not eligible because copyright protection “excludes works produced by non-humans”. …

theguardian cant-be-copyrighted [Image: Théâtre d’Opéra Spatial]

Don’t Fear the Singularity

2023-06-16

(OnlySky) – Adam Lee:

AI futurists assume that faster thinking automatically produces greater intelligence, leading to a Singularity of transcendent machine minds. However, thinking speed is the less important half of intelligence. We should remember the failure of past predictions and have some humility about what the future holds. …

alee our-ai-future [Image: ELIZA conversation]

Reverse Turing Test

2023-05-09

(Existential Comics) – I had ChatGPT give me an actual “Reverse Turing Test” and you’ll be happy to note I easily passed as a computer …

existentialcomics 497 [Image: Existential Comics 497: Reverse Turing Test]

Bing Is a Trap

2023-05-07

(Atlantic Tech) – Damon Beres:

Tech companies say AI will expand the possibilities of searching the internet. So far, the opposite seems to be true. …

theatlantic microsoft-bing consolidation [Image: ELIZA conversation]

Concern Trolling

2023-04-14

(Starts With A Bang!) – Ethan Siegel:

Concern trolling is basically when you pretend to care about an issue in order to undermine and derail any measures that would be taken to address the actual, underlying problem that’s affecting society. Instead, it’s a tactic that’s used to:

  • trigger infighting among the groups/people who are actively working on the true problem,
  • reframe the argument to try and take power away from an actual, effective movement,
  • and to flood the public discussion with noise and distraction,

often with tremendous success.

This is a particularly fruitful tactic when the seeds of fear are already present, as it’s extremely easy to take an already fearful (sometimes, justifiably so) group and point them towards an imagined boogeyman, causing them to attack and fixate on a completely extraneous, possibly even non-existent problem instead.

When someone attempts to make you afraid of something that hasn’t happened instead of a true, present danger, suspect this nefarious ploy. …

starts-with-a-bang concern-trolling [Image: Thinking AI]

Computer Hallucinations

2023-04-11

(OnlySky) – Adam Lee:

Big Tech companies eager to find the next big thing have latched onto AI chatbots—but they’re racing ahead of what the tech can actually do. …

LLMs are a clever technology, capable of startlingly human conversation. But their critical flaw is that they don’t have any concept of truth.

There’s no truth-seeking algorithm in them. There’s no process of reasoning, or weighing evidence, or applying logic. All they’re doing is stringing words together according to a probabilistic model. They draw no distinction between sound arguments and glib nonsense. When they do output correct information, it’s only a coincidence, not anything fundamental to their design. …
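To make that description concrete, here is a toy sketch of my own (not from the linked piece): a tiny hand-built bigram table in C that strings words together by sampling each next word from a probability distribution over the previous one. The vocabulary and probabilities are invented for illustration; the point is that nothing in the loop checks whether the output is true, only whether it is statistically likely.

    /* Toy "language model": sample each next word from a bigram table.
     * Fluency comes from the probabilities; truth never enters into it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define VOCAB 5

    static const char *words[VOCAB] = { "the", "moon", "is", "made", "of" };

    /* prob[i][j] = probability that word j follows word i (rows sum to 1.0) */
    static const double prob[VOCAB][VOCAB] = {
        /* the  */ { 0.0, 0.9, 0.0, 0.0, 0.1 },
        /* moon */ { 0.0, 0.0, 0.8, 0.2, 0.0 },
        /* is   */ { 0.2, 0.0, 0.0, 0.8, 0.0 },
        /* made */ { 0.0, 0.0, 0.0, 0.0, 1.0 },
        /* of   */ { 0.7, 0.3, 0.0, 0.0, 0.0 },
    };

    /* Sample the index of the next word given the previous one. */
    static int next_word(int prev)
    {
        double r = (double)rand() / RAND_MAX;
        double cumulative = 0.0;
        for (int j = 0; j < VOCAB; j++) {
            cumulative += prob[prev][j];
            if (r <= cumulative)
                return j;
        }
        return VOCAB - 1; /* guard against floating-point rounding */
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        int w = 0; /* start with "the" */
        for (int i = 0; i < 8; i++) {
            printf("%s ", words[w]);
            w = next_word(w);
        }
        printf("\n");
        return 0;
    }

Real LLMs condition on far more context than one previous word, but the generation step is the same in spirit: pick a plausible continuation, not a verified one.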

alee ai-isnt-an-oracle-of-truth [Image: ELIZA conversation]

Neither Artificial Nor Intelligent

2023-04-01

(Guardian Comments) – Evgeny Morozov:

Let’s retire this hackneyed term: while ChatGPT is good at pattern-matching, the human mind does so much more. …

theguardian chatgpt-human-mind [Image: ELIZA conversation]

Offensive AI

2023-03-19

(SMBC) – Zach Weinersmith:

Later, the robotic war on humans was surprisingly selective. …

smbc offensive-ai [Image: Zach Weinersmith: SMBC 2023-03-14: Offensive AI]

AI Search Is a Disaster

2023-02-16

(Atlantic Tech) – Matteo Wong:

Microsoft and Google believe chatbots will change search forever. So far, there’s no reason to believe the hype. …

They announced that they would incorporate AI programs similar to ChatGPT into their search engines—bids to transform how we find information online into a conversation with an omniscient chatbot. One problem: These language models are notorious mythomaniacs. …

theatlantic search-engine-chatbots [Image: ELIZA conversation]

AI as a Tool for Elevating Mediocrities

2023-02-16

(FTB stderr) – Marcus Ranum:

AI are going to have (for now) a problem with being right about things, in fields where there is an objective criterion for right or wrong. An AI cannot bloviate about the nature of dark energy or the volume of a rotational curve: there are verifiable answers. But I hypothesize that fields lacking objective criteria are going to be very easy for AIs to fudge responses into: marketing, psychology, philosophy, MilSF, screenplays, presidential speech-writing, on and on. A friend of mine and I have been playing with asking ChatGPT to write code for us, and have been enumerating the critical mistakes it makes. Perhaps ChatGPT can design a website that looks interesting, but when I asked it to write a C function that added a struct Node to the end of a linked list, it appears to have assumed I meant a struct Node * and accessed its member fields via pointers (instead of allocating memory and initializing a copy). C has objective criteria for good code and bad code, but speechwriting and MilSF do not. …

AI can be used as a tool for elevating mediocrities and, as such, it’s not just a threat to the mediocre, it’s a source of infinite noise. …
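For reference, a correct version of the linked-list task Ranum describes might look roughly like the sketch below. The struct Node layout and the function name append_node are assumptions for illustration (the post does not show its code); the key point is that the function allocates a fresh node and copies the caller's data, rather than linking the caller's pointer in directly.

    /* Append a copy of the caller's node data to the end of a singly
     * linked list, allocating new storage for the copy. */
    #include <stdlib.h>
    #include <string.h>

    struct Node {
        int value;          /* assumed payload field */
        struct Node *next;
    };

    /* Returns the newly allocated node, or NULL on allocation failure. */
    struct Node *append_node(struct Node **head, const struct Node *src)
    {
        struct Node *copy = malloc(sizeof *copy);
        if (copy == NULL)
            return NULL;

        memcpy(copy, src, sizeof *copy);
        copy->next = NULL;          /* the copy becomes the new tail */

        if (*head == NULL) {
            *head = copy;           /* empty list: the copy is the head */
        } else {
            struct Node *cur = *head;
            while (cur->next != NULL)
                cur = cur->next;    /* walk to the current tail */
            cur->next = copy;
        }
        return copy;
    }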

stderr in-the-crosshairs [Image: ELIZA conversation]

No Singularity

2023-02-11

(New Yorker Culture) – Ted Chiang:

We fear and yearn for “the singularity.” But it will probably never come. …

newyorker why-computers-wont [Image: Hitchhiker’s Guide to the Galaxy Deep Thought]