Quotidien Shaarli
May 23, 2024

UX strategist Scott Jenson, who left Google last month, criticizes the company's AI projects as poorly motivated and driven by panic.
Totally in line with my article from yesterday, by the way.

Above all, read the "cost and benefits" paragraph:
Throughout all this exploration and experimentation I've felt a lingering guilt, and a question: is this even worth it? And is it ethical for me to be using these tools, even just to learn more about them in hopes of later criticizing them more effectively?
The costs of these AI models are huge, and not just in terms of the billions of dollars of VC funds they're burning through at incredible speed. These models are well known to require far more computing power (and thus electricity and water) than a traditional web search or spellcheck. Although AI company datacenters are not intentionally wasting electricity in the same way that bitcoin miners perform millions of useless computations, I'm also not sure that generating a picture of a person with twelve fingers on each hand or text that reads as though written by an endlessly smiling children's television star who's being held hostage is altogether that much more useful than a bitcoin.
There's a huge human cost as well. Artificial intelligence relies heavily upon "ghost labor": work that appears to be performed by a computer, but is actually delegated to often terribly underpaid contractors, working in horrible conditions, with few labor protections and no benefits. There is a huge amount of work that goes into compiling and labeling data to feed into these models, and each new model depends on ever-greater amounts of said data — training data which is well known to be scraped from just about any possible source, regardless of copyright or consent. And some of these workers suffer serious psychological harm as a result of exposure to deeply traumatizing material in the course of sanitizing datasets or training models to perform content moderation tasks.
Then there's the question of opportunity cost to those who are increasingly being edged out of jobs by LLMs, despite the fact that AI often can't capably perform the work they were doing. Should I really be using AI tools to proofread my newsletters when I could otherwise pay a real person to do that proofreading? Even if I never intended to hire such a person? Or, more accurately, by managers and executives who believe the marketing hype from AI companies that proclaim that their tools can replace workers, without seeming to understand at all what those workers do.
Finally, there's the issue of how these tools are being used, and the lack of effort from their creators to limit their abuse. We're seeing them used to generate disinformation via increasingly convincing deepfaked images, audio, or video, and the reckless use of them by previously reputable news outlets and others who publish unedited AI content is also contributing to misinformation. Even where AI isn't being directly used, it's degrading trust so badly that people have to question whether the content they're seeing is generated, or whether the "person" they're interacting with online might just be ChatGPT. Generative AI is being used to harass and sexually abuse. Other AI models are enabling increased surveillance in the workplace and for "security" purposes — where their well-known biases are worsening discrimination by police who are wooed by promises of "predictive policing". The list goes on.

Source : Next - Flux Complet

Children used to obsessively put CDs and 7-inches on repeat, but streaming means they need digital devices and parental permission to play music. And there’s little being done to help
My daughter is nine years old. When I was her age, in 1989, I had my own small cassette player and a beloved pile of my own tapes – brand new, or made up of songs from the radio – that I could listen to whenever I wanted. The same went for my parents’ modest CD collection (Genesis’s Invisible Touch was awesome; their three Lionel Richie albums were boring). There were a few vinyl records knocking about and there were at least two radios – invariably set to Capital FM – that I could turn on whenever.
My daughter has none of these things. The only way she can access music is by making me get my phone out and play a song on my Spotify account. The inconvenience is trifling, but more painful and alarming is the growing gap between us when it comes to musical experience.

AI is not an arms race; it's a conjurers' contest, in the face of which default skepticism becomes the only healthy attitude.
Hallucinations are inevitable. They are a structural property of these systems. It-cannot-be-fixed. The industry knows this perfectly well.
The coming months will be (yet another) economic bloodbath. But Google doesn't care. Google holds 90.1% of the online search market. Google is a monopoly. It fears neither regulation, nor competition, nor commercial failure. It no longer even needs its service to be reliable. Google can mutilate itself without flinching. The semantic Web and the synthetic Web are dissolving into its monopolistic Web. Whether you like it or not, Google will badly google things for you on Google. Google will summarize the world for you however it pleases. Google will advise you to drink your own piss to cure your cancer. And you will listen, without flinching. So goes the monopoly.
Leave Google.