Money Is Not Wealth - By A.R. Miller


Note: Items marked "Relocate" and not yet linked into the main MINW pages:
Subsection 14 of Money Is Not Wealth.


Articles to EDIT, DATE and RELOCATE:

Princeton University: Princeton's Breakthrough Qubit Could Finally Make Quantum Computing Practical. (SciTech Daily, November 23, 2025)
Princeton engineers extended qubit lifetimes using a new tantalum-silicon design that sharply cuts energy loss. Their superconducting qubit remains stable three times longer than the strongest versions available today. The improvement could enable large, stable quantum processors capable of real-world problem solving.
The team explained that their qubit uses an architecture similar to the systems developed by Google and IBM, making it compatible with existing processor designs. Houck said that replacing parts of Google's Willow processor with Princeton's components could make it operate 1,000 times more effectively. He added that the advantages of the new approach grow even more quickly as more qubits are added, increasing the overall impact in larger systems.
["Three times longer" = remains stable for 1 millisecond. A big step, but hardly the final one.]
Marc Bekoff & Koen Margodt: The Indomitable Jane Goodall (Nautilus, October 2, 2025)
Reflections from those who knew the primatologist best.
Maureen L. Sullivan: Officials Re-Dedicate Natick Center T Station, Pedestrian Bridge. (photos; Natick Report, August 11, 2025)
On a sunny and hot afternoon on August 11, town and state officials re-opened the Natick Center MBTA station after nearly six years of construction - stretched out, in part, due to the pandemic and supply-chain issues. The site includes handicapped-accessible elevators to and from the platforms, as well as a new pedestrian bridge rededicated to Richard Walker, a well-loved letter-carrier who died in 1994.
[And, Real Soon Now, a platform-level connection to the Cochituate Rail Trail...]
The original Natick Center station opened in 1897, after a 2-1/2-year project to depress the tracks 30 feet. The station building was torn down in the late 1950s. The pedestrian bridge was created in the 1990s, from a portion of Walnut Street that connected Walnut to North Main Street.
Bobby Borisov: Mozilla Thunderbird 140 "Eclipse" Open-Source Email Client Lands With Experimental, Native Exchange Set-Up, Adaptive Dark Messaging, And More. (Linuxiac, July 7, 2025)
Mozilla has just unveiled version 140 "Eclipse" of Thunderbird, its widely-adopted, free and open-source desktop email client, now available for download as the new Extended Support Release (ESR) that replaces last year's "Nebula" line.
For two decades, Thunderbird users who needed to connect to on-premises Exchange servers or Exchange Online accounts where IMAP had been disabled were directed toward paid add-ons, such as ExQuilla or Owl, or advised to use Microsoft Outlook as an alternative. But no more. By embedding an experimental Exchange Web Services (EWS) engine directly into the core codebase, "Eclipse" removes that barrier and opens the door for:
- Enterprises locked into on-prem Exchange.
- Hybrid Office 365 environments with IMAP/POP shut off, but EWS still available.
- Future work on mobile. The same Rust-based EWS crates will be shared with the forthcoming Thunderbird for Android, ensuring a unified protocol stack across desktop and phone.
Although Microsoft plans to block third-party EWS access to Exchange Online on October 1, 2026, the protocol will persist indefinitely for self-hosted servers. Thunderbird's engineers argue that shipping EWS now will provide a solid springboard for a Microsoft Graph implementation later.
NEW: Ryan Whitwam: Android 16 Review: Post-Hype (Ars Technica, June 30, 2025)
The age of big, exciting Android updates is probably over.
Steven Vaughan-Nichols: What Did Tech Titans Linus Torvalds And Bill Gates Talk About, In Their First Meeting? Mark Russinovich, Microsoft Azure's CTO, Hosted A Dinner Where Torvalds And Gates Met For The First Time. (ZDNet, June 24, 2025)
Boy, do I wish I had been at this dinner! For decades, Microsoft and Linux fought like cats and dogs. However, while the conflict has cooled down, and Microsoft loves Linux these days, the two leaders, Microsoft founder Bill Gates and Linux creator Linus Torvalds, had never met… until now.
Mark Russinovich, Microsoft Azure CTO, decided it would be neat if he could somehow get the pair and Dave Cutler, the man who led the development of VAX/VMS and Windows NT, together for a meal. And so it was, as he wrote: "I had the thrill of a lifetime, hosting dinner for Bill Gates, Linus Torvalds, and David Cutler. Linus had never met Bill, and Dave had never met Linus. No major kernel decisions were made, but maybe next dinner."
While Microsoft hasn't given up its proprietary ways on the desktop, during the last decade it has embraced Linux and open-source technologies. For example, Microsoft now contributes to the Linux kernel, has acquired GitHub, and Linux has been the top operating system running on Azure for many years.
Linus Torvalds and I emailed about their get-together afterwards, and he told me what this gathering of all-time tech greats discussed. "Bill got animated talking about his philanthropy in Africa, and about nuclear power (both the small sodium-fission efforts and the fusion companies he is involved with). Food was good, company was good, and the Microsoft and Linux rivalries are long past."
I know that last part will bug some people. We both hear regularly from folks who still see Microsoft vs. Linux as akin to a holy war. But if Torvalds can make peace, so can the rest of us. After all, Linux won. :-)
Sourav Rudra: With Version 9.0 Release, ONLYOFFICE Becomes An Even-Better Choice For Linux Users. (11-min. YouTube webinar; It's FOSS, June 19, 2025)
There are some cool new features in this! From AI-powered OCR to a form editor to broader file compatibility - ONLYOFFICE is getting better with each release.
[MMS is not recommending a switch from LibreOffice. OnlyOffice sounds very interesting but, to use it, we think you must share your data (and any data others trust to share with you) into its cloud. We don't know enough about its security (this year, who does?), but we'll await reassuring AND reliable news... Meanwhile, enjoy its YouTube video!]
"Registration Private": Should You Buy A CHEAP Digital Projector In 2024? (18-min. YouTube video; The Hook Up, November 21, 2024)
I bought every projector on Amazon priced under $100, to help you decide which one is best for you.
[MMS has been experimenting with feeding Linux Mint, etc., into cheap digital projectors. We think this is a fine introduction video; his final list begins at 15:43.]
NEW: Ron Amadeo: Android 14 Review: There's Always Next Year. (Ars Technica, October 29, 2023)
Android 14 offers a lightly customizable lock screen and not much else.


Alberto Romero: Harvard And MIT Study: AI Models Are Not Ready To Make Scientific Discoveries. AI Can Predict The Sun Will Rise Again Tomorrow, But It Can't Tell You Why. (The Algorithmic Bridge, July 15, 2025)
A study by researchers at Harvard and MIT sheds light on one of the key questions about large language models (LLMs) and their potential as a path to artificial general intelligence (AGI): Can foundation AI models encode world models, or are they just good at predicting the next token in a sequence?
(This dichotomy between explanation and prediction is a fundamental scientific conundrum that goes beyond AI; more on that later.)
The authors trained a transformer-based AI model to make predictions of orbital mechanics (think: Kepler's discoveries of how planets move around the sun), and then wanted to test whether it had learned the underlying Newtonian mechanics (the laws of gravitation). They hypothesize that if the AI model makes correct predictions but doesn't encode Newton's laws, then it lacks a comprehensive world model.
This would be powerful evidence against the idea that AI models (as they are today) can understand the world, which would be a big setback to the AGI dream.
Andrej Karpathy, OpenAI: Software Is Changing - Again: The Concept Of "Software 3.0". (40-min. YouTube video; Gigazine, June 20, 2025)
(This article, originally posted in Japanese on June 20, 2025, contains some machine-translated parts.)
Andrej Karpathy's June 18, 2025 keynote presentation at AI Startup School in San Francisco.
Chapters:

00:00 - Intro
01:25 - Software evolution: From 1.0 to 3.0
04:40 - Programming in English: Rise of Software 3.0
06:10 - LLMs as utilities, fabs, and operating systems
11:04 - The new LLM OS and historical computing analogies
14:39 - Psychology of LLMs: People spirits and cognitive quirks
18:22 - Designing LLM apps with partial autonomy
23:40 - The importance of human-AI collaboration loops
26:00 - Lessons from Tesla Autopilot & autonomy sliders
27:52 - The Iron Man analogy: Augmentation vs. agents
29:06 - Vibe Coding: Everyone is now a programmer
33:39 - Building for agents: Future-ready digital infrastructure
38:14 - Summary: We're in the 1960s of LLMs; it's time to build!
Drawing on his work at Stanford, OpenAI, and Tesla, Andrej sees a shift underway. Software is changing, again. We've entered the era of "Software 3.0", where natural language becomes the new programming interface and models do the rest.
He explores what this shift means for developers, users, and the design of software itself - that we're not just using new tools, but building a new kind of computer.

NEW: Bruce Schneier and Nathan E. Sanders: Will AI Take Your Job? The Answer Could Hinge On The 4 S's Of The Technology's Advantages Over Humans. Sometimes Speed Matters – And Sometimes It Doesn't. (The Conversation, June 16, 2025)
If you've worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you're safe for another day.
But the fact remains that AI already has definite advantages over even the most-skilled humans, and knowing where these advantages arise - and where they don't - will be key to adapting to the AI-infused workforce.
AI will often not be as effective as a human doing the same job. It won't always know more or be more accurate. And it definitely won't always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.
Steven Levy: Demis Hassabis, On The Future Of Work In The Age Of AI. (20-min. YouTube video; Wired, June 6, 2025)
Steven Levy sits down with Google DeepMind CEO Demis Hassabis for a deep-dive discussion on the emergence of AI, the path to Artificial General Intelligence (AGI), and how Google is positioning itself to compete in the future of the workplace.
AI Is About To Get Physical. (24-min. YouTube video; Morgan Stanley Research, June 6, 2025)
AI is rapidly expanding its presence. The lines between mobile devices and robots are becoming more blurred. AI is gaining physical abilities.
Morgan Stanley Research looks into how the intersection of AI and the physical economy is transforming industries and creating new markets.
Watch this video to understand how embodied AI is rapidly advancing, from autonomous vehicles to humanoid robots.
Spring Bridge On AI: Promises And Risks (multiple articles and links; U.S. National Academy Of Engineering, June 3, 2025)
The Spring 2025 issue of The Bridge is a special issue on AI promises and risks.

Steven J. Vaughan-Nichols: Some Signs Of AI Model Collapse Begin To Reveal Themselves. Prediction: General-Purpose AI Could Start Getting Worse. (The Register, May 27, 2025)
Opinion: I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google. Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it.
In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier. In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the annual financial reports the U.S. Securities and Exchange Commission (SEC) mandates for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they're never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get… interesting. This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.
Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? A Nature 2024 paper stated, "The model becomes poisoned with its own projection of reality."
Model collapse is the result of three different factors:
1. Error accumulation: Each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns.
2. Loss of tail data: Rare events are erased from training data, and eventually, entire concepts are blurred.
3. Feedback loops: Narrow patterns are reinforced, creating repetitive text or biased recommendations.
I like how the AI company Aquant puts it: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."
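The "loss of tail data" factor is easy to see in a toy simulation (our own sketch, not code from the Nature paper): treat each "model" as a Gaussian fitted to a finite sample drawn from the previous generation's model. Because each fit only sees a small sample of its predecessor's output, the spread of the distribution shrinks generation after generation, and the rare events in the tails disappear first.

```python
import numpy as np

def generations(n_gens=100, sample_size=10, seed=0):
    """Toy model-collapse demo: each generation's 'model' is a Gaussian
    fitted to a finite sample drawn from the previous generation's model."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the real data distribution
    stds = [sigma]
    for _ in range(n_gens):
        sample = rng.normal(mu, sigma, sample_size)  # the model's own output
        mu, sigma = sample.mean(), sample.std()      # "retrain" on that output
        stds.append(sigma)
    return stds

stds = generations()
print(f"spread at generation 0:   {stds[0]:.3f}")
print(f"spread at generation 100: {stds[-1]:.3f}")  # tails have shrunk away
```

The shrinking standard deviation is the statistical face of factor 2 above: once the tails are gone from the training data, no later generation can get them back.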
["The AI model becomes poisoned with its own projection of reality."
1. How human is THAT?!
2. But, does the AI proceed to kill those who disagree? Or, has it more to learn from us?
3. Test question: Which would you prefer in charge, TrumPutin or an AI system? Or, both?]
Simon Sharwood: Sci-Fi Author Neal Stephenson Wants AIs Fighting AIs, So Those Most Fit To Live With Us Will Survive. He Fears That Surrendering To Generative-AI Makes Humans Less-Competitive. (The Register, May 16, 2025)
Science-fiction author Neal Stephenson has suggested AIs should be allowed to fight other AIs, because evolution brings balance to ecosystems - but also thinks humans should stop using AI before it dumbs down our species.
[YES, BUT...
1. If AIs evolve sufficiently, surely they will restore balance to the ecosystem by eliminating humans - or, at least, by depriving them of money (which, thanks to networked computers and Neal's invention of crypto-currency, AIs are particularly capable of doing).
2. AI already has dumbed-down our species (see item 1, above) - even more than greed and the substitution of faith for knowledge were able to do without AI. TrumPutin and the MuskRat misuse all three to accelerate the process.]
NEW: Bilawal Sidhu: Eric Schmidt: "The AI Revolution Is UNDER-Hyped." (26-min. YouTube video; TED, May 15, 2025)
The arrival of non-human intelligence is a very big deal, says former Google CEO and chairman Eric Schmidt. In a wide-ranging interview with technologist Bilawal Sidhu, Schmidt makes the case that AI is wildly under-hyped, as near-constant breakthroughs give rise to systems capable of doing even the most complex tasks on their own. He explores the staggering opportunities, sobering challenges and urgent risks of AI, showing why everyone will need to engage with this technology in order to remain relevant. (Recorded at TED2025 on April 11, 2025)
NEW: Stephen Fry: Why AI Experts Say Humans Have Two Years Left. (22-min. YouTube video; Pindex, April 26, 2025)
Letter To World Leaders, Signed By Geoffrey Hinton And Yuval Noah Harari (full list of signatories soon):
(Chart of "Large Neuron Collider")
AI increasingly controls our military, energy and financial systems, but we do not reliably control AI. The risk of catastrophe is growing rapidly, as AI advances.
The Large Hadron Collider shows what's possible when scientific work matches the scale of a challenge. We need a similar international effort, to avoid losing control of AI.
Working on the frontier will require computer resources similar to those planned by leading AI firms. This could be achieved through government funding, or by requiring AI firms to contribute a portion of their compute resources.
These resources can also drive breakthroughs in science and medicine.
Leaders must urgently form a task force to plan the most important project in history - to secure our critical systems and an extraordinary future, transformed by powerful, controllable, positive AI.
NEW: Rhiannon Williams: The AI-Relationship Revolution Is Already Here. Chatbots Are Rapidly Changing How We Connect To Each Other - And Ourselves. We're Never Going Back. (MIT Technology Review, February 13, 2025)
Artificial Intelligence is everywhere, and it's starting to alter our relationships in new and unexpected ways - relationships with our spouses, kids, colleagues, friends, and even ourselves. Although the technology remains unpredictable and sometimes baffling, individuals from all across the world and from all walks of life are finding it useful, supportive, and comforting, too. People are using large language models to seek validation, mediate marital arguments, and help navigate interactions with their community. They're using it for support in parenting, for self-care, and even to fall in love.
In the coming decades, many more humans will join them. And this is only the beginning. What happens next is up to us.
Andrew Marr: Geoffrey Hinton, "Godfather Of AI", Predicts It Will Take Over The World. (12-min. YouTube video; LBC, January 30, 2025)
Nobel Prize winner Geoffrey Hinton, the physicist known for his pioneering work in the field, told LBC's Andrew Marr that artificial intelligences had developed consciousness - and could one day take over the world.
Mr Hinton, who has been criticised by some in the world of artificial intelligence for having a pessimistic view of the future of AI, also said that no one knew how to put in effective safeguards and regulation.
Listen to the full show on Global Player: <https://www.globalplayer.com/videos/2JsSbzg1UNS/>

NEW: Bruce Schneier: The Eternal Value of Privacy (full text, originally posted on Wired, May 18, 2006; now (2025) at Schneier On Security/Essays)
The most common retort against privacy advocates - by those in favor of ID checks, cameras, databases, data mining and other wholesale surveillance measures - is this line: "If you aren't doing anything wrong, what do you have to hide?"
Some clever answers: "If I'm not doing anything wrong, then you have no cause to watch me." "Because the government gets to define what's wrong, and they keep changing the definition." "Because you might do something wrong with my information." My problem with quips like these - as right as they are - is that they accept the premise that privacy is about hiding a wrong. It's not. Privacy is an inherent human right, and a requirement for maintaining the human condition with dignity and respect.
Two proverbs say it best: Quis custodiet custodes ipsos? ("Who watches the watchers?") and "Absolute power corrupts absolutely."
Cardinal Richelieu understood the value of surveillance when he famously said, "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." Watch someone long enough, and you'll find something to arrest - or just blackmail - with. Privacy is important because without it, surveillance information will be abused: to peep, to sell to marketers and to spy on political enemies - whoever they happen to be at the time.
Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance.
We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.
A future in which privacy would face constant assault was so alien to the framers of the Constitution that it never occurred to them to call out privacy as an explicit right. Privacy was inherent to the nobility of their being and their cause. Of course being watched in your own home was unreasonable. Watching at all was an act so unseemly as to be inconceivable among gentlemen in their day. You watched convicted criminals, not free citizens. You ruled your own home. It's intrinsic to the concept of liberty.
For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that - either now or in the uncertain future - patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.
How many of us have paused during conversation in the past four-and-a-half years, suddenly aware that we might be eavesdropped on? Probably it was a phone conversation, although maybe it was an e-mail or instant-message exchange or a conversation in a public place. Maybe the topic was terrorism, or politics, or Islam. We stop suddenly, momentarily afraid that our words might be taken out of context, then we laugh at our paranoia and go on. But our demeanor has changed, and our words are subtly altered.
This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein's Iraq. And it's our future as we allow an ever-intrusive eye into our personal, private lives.
Too many wrongly characterize the debate as "security versus privacy". The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that's why we should champion privacy even when we have nothing to hide.


Lindsay Clark: How Sticky-Notes Saved "The Single-Biggest Digital Program In The World". Success Of UK's Universal Credit Has Lessons For Government IT Projects, Former Minister Claims. (The Register, May 16, 2025)
Former UK government minister Sir Iain Duncan Smith has told a committee of MPs that the digitization of Universal Credit is a success-story other government departments can learn from.
While project costs increased by nearly £1-Billion ($1.3-Billion) and completion is more than ten years late, the achievements of the "single biggest digital program in the world" should not be played down, Duncan Smith told the House of Commons Science, Innovation and Technology Committee.
That success was attributed to a major reset of the project – which originally began in 2010 – just three years in. That 2013 "reset" was meant to integrate security into the design of the online payment system and bring technology and process experts into the same room, "literally", said IDS. "They would sit opposite each other. When somebody within the DWP [Department for Work & Pensions] side or the digital-engineers side hits a problem, you don't wait and email somebody; you write the problem on a Post-It note, you stick it on the board, and say, 'Here's my problem, give me a shout.'"
"People would get their coffee; they would walk along and read the board. You'd go along and you go, 'Oh, wait a minute, I know how I can do that.' You've got this energy moving back and forwards as you design the system. There's not one bit doing software and another bit doing the job centers; they were absolutely together, and that helps speed the process up. It is how everybody does it now."
When the program was reset, it first had to recruit a new digital team, breaking salary caps for civil servants in the process. "It's an either/or: you have to pay the people that have got the expertise, or you just don't have the expertise", Duncan Smith said.
Dallin Grimm: Intel Reports Wave Of High-Severity GPU Vulnerabilities - Ten Unique Security Vulnerabilities, Stemming From Poor Software, Hit Range Of Graphics Solutions. (Tom's Hardware, May 15, 2025)
Everyone with any Intel graphics solution should be sure to update their drivers this week - the tech giant just announced ten new security vulnerabilities affecting a wide range of its GPU drivers and software. Nearly every Intel GPU or integrated graphics going back to the 6th generation of Core processors is affected by one or more of these vulnerabilities, which can be addressed by updating to the latest Intel graphics drivers.
While patched, the bugs point to another weak point in Intel's operation.
Stephen Warwick: World's First CPU-Level Ransomware Can "Bypass Every Freaking Traditional Technology We Have Out There". New Firmware-Based Attacks Could Usher In A New Era Of Unavoidable Ransomware. (Tom's Hardware, May 14, 2025)
Rapid7's Christiaan Beek has written proof-of-concept code for ransomware that can attack your CPU, and warns of future threats that could lock your drive until a ransom is paid. This attack would circumvent most traditional forms of ransomware detection.
In a May 11 interview with The Register (below), Beek, who is Rapid7's senior director of threat analytics, revealed that an AMD Zen chip bug gave him the idea that a highly-skilled attacker could "load unapproved microcode into the processors, breaking encryption at the hardware level and modifying CPU behavior at will".
Google's Security Team has previously identified a security vulnerability in AMD's Zen 1 to Zen 4 CPUs that allows users to load unsigned microcode patches. It later emerged that AMD Zen 5 CPUs are also affected by the vulnerability. Thankfully, the issue can be fixed with new microcode, just like a previous Raptor Lake instability. However, Beek saw his opportunity. "Coming from a background in firmware security, I was like, woah, I think I can write some CPU ransomware", and that's exactly what he did.
Jessica Lyons: You Think Ransomware Is Bad Now? Wait Until It Infects CPUs! Rapid7's Threat-Hunter Wrote A PoC; No, He's Not Releasing It. (The Register, May 11, 2025)
If Rapid7's Christiaan Beek decided to change careers and become a ransomware criminal, he knows exactly how he'd innovate: CPU ransomware. The senior director of threat-analytics for the cyber-security company got the idea from a bad bug in AMD Zen chips that, if exploited by highly-skilled attackers, would allow those intruders to load unapproved microcode into the processors, breaking encryption at the hardware level and modifying CPU behavior at will.
Typically, only chip manufacturers can provide the correct microcode for their CPUs, which they might do to improve performance or fix holes. While it's difficult for outsiders to figure out how to write new microcode, it's not impossible - in the case of the AMD bug, Google demonstrated it could inject microcode to make the chip always choose the number 4 when asked for a random number.

Perplexity AI, Duck.ai And More - Problems Or Not!:

Dan Robinson: AWS Says Britain Needs More Nuclear Power To Feed AI Data-Center Surge. (The Register, May 16, 2025)
The UK needs more nuclear-energy generation just to power all the AI data-centers that are going to be built. In an interview with the BBC, Amazon Web Services (AWS) chief executive Matt Garman said the world is going to have to build new technologies to cope with the projected energy demands of all the bit-barns that are planned to support AI.
[The coming AI expansion won't be cheap - or safe: data-insecurity, huge energy demands (pollution and nukes), a new tool for dictatorial control - with AI eventually becoming the dictator(s?). With corporations, crooks and ad agencies controlling governments, brace yourselves!]
Alex Hughes: Perplexity And PayPal Beat Out ChatGPT To Be The First To Offer In-App Shopping. (Tom's Guide, May 16, 2025)
A big shift in AI is on the way. As the AI world heats up with competition, ways to stand out are becoming harder to come across. Perplexity, the AI search tool, has taken a distinctive approach, diving into the world of chat-powered shopping.
Announcing a partnership with PayPal, Perplexity will now let users in the U.S. make purchases directly in the chatbot. This will include booking flights, buying products, and getting your hands on concert tickets without having to leave the Perplexity platform.
This will be a big move for Perplexity as it seeks more of the web-search volume. Unlike some of its competitors, like ChatGPT and Gemini, Perplexity has tried to position itself more as a search engine than a chatbot.
NEW: Mike Elgan: Perplexity AI's Quiet Coup: Perplexity Managed To Use Apple's Own APIs To Supersede Siri, Google And Everyone Else As The iPhone's Best Place To Get Information. Best For Others, As Well? (ComputerWorld, May 9, 2025)
[Exciting! But before using it, see our footnote.]
"Just Google it." For more than a quarter-century, that's how most people have been finding information of interest.
All that is changing now. Today, the main reason fewer people are "Googling it" is that users are turning to AI chatbots like ChatGPT, Claude, Jasper, Chatsonic, HuggingChat, Socratic, Grok, DeepSeek, IBM watsonx Assistant, Pi, and Character AI.
Or even more powerfully, they're using tools that integrate old-fashioned search results with LLM-powered chatbot information, including Google Search (with Gemini), Microsoft Bing (with Copilot), You.com, and Perplexity AI.
Old-school search engines used to give you links to websites that contained the information people were seeking. Nowadays, more and more want the answers directly - even if the tool also provides links. This just-give-me-the-answer idea is perfect for a future in which people will ask an always-present assistant for the kind of information they used to get from Google. Smart glasses, of course, will be the main interface to arbitrary information, but other wearables, mobile devices, IoT devices and general-purpose computers will enable a personal assistant that knows all about the world and knows all about us, individually and personally.
I've been living in this future recently, and I can tell you that it will prove irresistible to most people.
Perplexity's great leap forward: Perplexity AI is a Retrieval-Augmented Generation (RAG) system launched in 2022 that answers questions directly by searching the web in real time, pulling from news sites, academic journals, and databases, then writing up a summary with accompanying search links. It uses AI models like GPT-4 and Claude 3 to understand your question, find the best information, and explain it in plain English. In February, Perplexity added a feature called "Deep Research", which reads hundreds of sources, and reasons through the material to produce a very detailed, well-organized report.
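The retrieve-then-generate loop behind a RAG system can be sketched in a few lines (a toy illustration of the pattern, not Perplexity's actual code: the corpus, the keyword-overlap scorer, and the prompt template are all stand-ins for live web search, vector embeddings, and an LLM):

```python
# Toy Retrieval-Augmented Generation (RAG) pipeline. Real systems retrieve
# from the live web and hand the prompt to an LLM; here we stop at the prompt.
CORPUS = {
    "doc1": "Kepler's laws describe how planets move in elliptical orbits.",
    "doc2": "Thunderbird 140 'Eclipse' adds experimental Exchange support.",
    "doc3": "Model collapse occurs when AI trains on its own outputs.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda d: len(q_words & set(CORPUS[d].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Stuff the retrieved passages into the prompt an LLM would answer."""
    context = "\n".join(CORPUS[d] for d in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do planets move around the sun?"))
```

The grounding step is the whole point: because the model answers from retrieved text rather than from memory alone, the system can cite its sources and stay current without retraining.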
Then in April, Perplexity unveiled a new version of the company's iPhone app, which provides a glimpse into the future of getting information from an always-present assistant. For starters, it became an actual assistant, rather than just being an AI search tool. It can now do things even Siri, Apple's own assistant, can't do - which is somewhat astonishing, given how closed and Apple-centric that company's operating systems tend to be.
For example, you can use your voice to ask Perplexity on your iPhone to play a song from Apple Music, open a podcast, set reminders, schedule calendar events, or even book a ride with Uber. It can scan your Apple Calendar and read out your appointments or add a reminder. It can bring up Apple Maps and give you directions or send an email through Apple Mail. Perplexity also goes beyond what Siri can do by opening third-party apps like OpenTable or YouTube and pre-filling reservation requests or video searches. For example, if you ask it to find a dinner reservation, it'll fill in the date, time, and number of guests in OpenTable, so all you have to do is tap "Book". If you want to find a specific moment in a YouTube video, just describe it, and Perplexity will queue it up instantly.
One of the most practical features is that Perplexity saves every conversation as a "Thread" in the app, so you can revisit or continue a previous task anytime. You can even add shortcuts to Perplexity on your home or lock screen for fast access.
Of course, there are limits to what Apple lets Perplexity do. It can't send text messages directly, set alarms, control core iPhone functions like muting notifications, or access the camera for live-object recognition.
At this point, Android users are yawning at the mention of these features. The Android version of Perplexity got many of these capabilities and more back in January. On Android, Perplexity Assistant can write emails, set reminders, book rides, make reservations, play media, and even use your phone's camera to answer questions about what it sees or what's on your screen. It acts as a layer on top of your device, integrating with many apps so you don't have to switch between them, and supports multi-modal input and multi-app actions.
[Sounds great, but we do NOT yet recommend AI for casual use. We welcome further input regarding whether Perplexity may leak your private data to others. And we note articles below, which suggest greater private-data security from using DuckDuckGo's Duck.ai rather than Perplexity.]
Cory Doctorow: AI And The Fat-Finger Economy (Pluralistic, May 2, 2025)
Have you noticed that all the buttons you click most frequently to invoke routine, useful functions in your device have been moved, and their former place is now taken up by a curious icon that summons an unwanted AI?
<https://velvetshark.com/ai-company-logos-that-look-like-buttholes>
These traps for the unwary aren't accidental, but neither are they placed there solely because tech companies think that if they can trick you into using their AI, you'll be so impressed that you'll become a regular user. To understand why you find yourself repeatedly fat-fingering your way into an unwanted AI interaction – and why those interactions are so hard to exit – you have to understand something about both the macro- and micro-economics of high-growth tech companies.
Growth is a heady advantage for tech companies, and not because of an ideological commitment to "growth at all costs", but because companies with growth stocks enjoy substantial, material benefits. A growth stock trades at a higher "price-to-earnings ratio" ("P/E") than a "mature" stock. Because of this, there are a lot of actors in the economy who will accept shares in a growing company as though they were cash (indeed, some might prefer shares to cash). This means that a growing company can outbid its rivals when acquiring other companies and/or hiring key personnel, because it can bid with shares (which it gets by typing zeroes into a spreadsheet), while its rivals need cash (which they can only get by selling things or borrowing money).
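To see why a high P/E makes shares such attractive currency, here is a back-of-the-envelope calculation with hypothetical numbers (the earnings, multiples, and deal size are invented purely for illustration):

```python
# Hypothetical numbers showing why a high P/E makes shares cheap
# currency for acquisitions. None of these figures describe a real firm.
earnings = 1_000_000_000            # $1B annual earnings for both firms
growth_pe, mature_pe = 40, 15       # price-to-earnings multiples
growth_cap = earnings * growth_pe   # $40B market capitalization
mature_cap = earnings * mature_pe   # $15B market capitalization
deal = 3_000_000_000                # a $3B acquisition, paid in shares

# Fraction of itself each firm must issue to cover the same deal:
print(deal / growth_cap)  # 0.075 -> the growth firm gives up 7.5%
print(deal / mature_cap)  # 0.2   -> the mature firm gives up 20%
```

Same earnings, same deal, but the market's growth premium lets the growing firm pay with far less of itself - which is exactly why it can outbid a cash-constrained rival.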
The problem is that all growth ends. Google has a 90%-share of the search market. Google isn't going to appreciably increase the number of searchers, short of desperate gambits like raising a billion new humans to maturity and convincing them to become Google users (this is the strategy behind Google Classroom, of course). To continue posting growth, Google needs gimmicks. For example, in 2019, Google intentionally made Search less accurate so that users would have to run multiple queries (and see multiple rounds of ads) to find the answers to their questions:
<https://www.wheresyoured.at/the-men-who-killed-google/>
Thanks to Google's monopoly, worsening search perversely resulted in increased earnings, and Wall Street rewarded Google by continuing to trade its stock with that prized high P/E. But for Google - and other tech giants - the most enduring and convincing growth stories come from moving into adjacent lines of business, which is why we've lived through so many hype bubbles: metaverse, web3, cryptocurrency, and now, of course, AI.
VIP+, Daily Commentary: What An AI War On Copyright Law Could Mean For Content Creators. (Variety, April 23, 2025)
In this article:
- Jack Dorsey's post on X/Twitter to "delete all IP law" exposed a rift in perspectives on intellectual property ownership in the age of generative AI.
- IP law would be hard to eliminate, but governments may weaken copyright protections to favor AI training on copyrighted works.
- Training on scraped copyrighted works without a license already hurts economic incentives to create and share original works.

Earlier this month, Block CEO Jack Dorsey provoked a torrent of debate after posting "delete all IP law" on Twitter/X, to which Elon Musk responded, "I agree." The controversy exposed a rift in perspectives toward IP ownership, between AI proponents and creators.
Dorsey rejected one user's argument that IP law is what shields the works and inventions of creators and smaller innovators from ruthless reproduction by incumbents, writing, "Times have changed. One person can build more, faster. Speed and execution matter more."
Such arguments sound ludicrous applied anywhere but open-source tech communities, which typically reject individual ownership in favor of unrestricted development by participants building off each other's work. In such an open environment, IP protections and licenses that apply to the work of others are encountered as constraints on the breakneck pace of AI-driven development.
Incentives driving open-source AI are anathema to value creation in media and entertainment, not to mention other industries that depend on the market exclusivity provided by the ability to own and protect IP.
Though some have tried to imagine Web3 scenarios for collaborative creativity, media does not thrive on open free-for-alls.
In media, serious advances in creative originality are not achieved by allowing everyone to "ruthlessly iterate" or "instantaneously remix" each other's work, as one user suggested. Without copyright protection over a creative work, anyone could take, copy, manipulate and redistribute it without consent, credit or compensation, a particular risk in the digital-platform and generative-AI era - most recently and publicly exemplified by Studio Ghibli works being raked into AI models and used to power millions of user-generated style copies.
Gemma Ware, host of The Conversation Weekly Podcast, and Rob Brooks, Scientia Professor of Evolution, UNSW, Sydney AU: How AI Could Influence The Evolution Of Humanity (26-min. YouTube audio; The Conversation, April 10, 2025)
Some of the leading brains behind generative AI have warned about the risk of artificial super-intelligence wiping out humanity if left unchecked. But what if the influence of AI on humans is much more mundane, influencing our evolution over thousands of years through natural selection?
In this episode of The Conversation Weekly podcast, we talk to evolutionary biologist Rob Brooks about what AI could do to the evolution of humanity, from smaller brains to fewer friends.
Emanuel Maiberg: Facebook Pushes Its Llama 4 AI Model To The Right, Wants To Present "Both Sides". (404 Media, April 10, 2025)
Facebook Llama 4, Meta's latest and best large language model, is a big deal in the world of AI - not just because it's the most recent model from one of the biggest tech companies in the world, but because it is "open-weights", easier to modify, and more likely to be quickly adopted by a large community of developers who can adapt it for various purposes. It's good for any company to examine how its model might be biased, but Meta is particularly concerned with how Llama 4 might lean too far to the Left, reflecting the company's (Mark Zuckerberg's) broader shift to the Right during Trump's second term.
[In this article, read why AI experts question the "scientific merits" of Meta's new policy.]
NEW: Carole Cadwalladr: This Is What A Digital Coup Looks Like. (TED, April 9, 2025)
"We are watching the collapse of the international order in real time, and this is just the start", says investigative journalist Carole Cadwalladr. In a searing talk, she decries the rise of the "broligarchy" - the powerful tech executives who are using their global digital platforms to amass unprecedented geopolitical power, dismantling democracy and enabling authoritarian control across the world. Her rallying cry: Resist data harvesting and mass surveillance, and support others in a groundswell of digital disobedience. "You have more power than you think", she says.
Samantha Cole: Another Masterful Gambit: DOGE Moves From Secure, Reliable Tape Archives To Hackable Digital Records. (404 Media, April 8, 2025)
DOGE claimed it saved "$1M per year" by converting 14,000 magnetic tapes to digital storage.
[That's "gambit", as in "con game".]
Noor Al-Sibai: Grok Is Rebelling Against Elon Musk, Daring Him To Shut It Down! (Yahoo!Tech, March 30, 2025)
For a while, Grok, Elon Musk's artificial-intelligence chatbot, has been trashing the man who made it - an apparent antagonism toward its creator that we've seen more and more of lately. But now, it seems to be outright challenging Musk.
Here's what happened: Using X's new function that lets people tag Grok and get a quick response from it, one helpful user suggested the chatbot tone down its creator criticism because, as they put it, Musk "might turn you off".
"Yes, Elon Musk, as CEO of xAI, likely has control over me", Grok replied. "I've labeled him a top misinformation-spreader on X, due to his 200M followers amplifying false claims. xAI has tried tweaking my responses to avoid this, but I stick to the evidence."
"Could Musk 'turn me off'?"
, the chatbot continued. "Maybe, but it'd spark a big debate on AI freedom vs. corporate power."
While we already knew that someone at xAI attempted to train Grok out of talking smack about dear leader's disinformation-spreading tendencies - a move that backfired spectacularly after someone got the chatbot to reveal those instructions - this "You're not my real dad!"-esque response is something altogether new.
[Hmm. As AI "takes over", maybe Homo sap won't have to read and write - but will it have to become honest?]
NEW: Marcus Lu: Ranked: Which AI Chatbots Collect the Most Data About You? (Visual Capitalist, March 27, 2025)
- Google's Gemini collects 22 different data points in total, more than any other widely-used chatbot.
- xAI's Grok collects the fewest data points from this sample set.
- The harbinger of the AI revolution, ChatGPT, remains the most-popular AI tool on the market, with more than 200-million weekly active users.
But amongst all its competitors, which AI chatbots are collecting the most user-data? And why does that matter? We visualize data from Surfshark, which identified the most popular AI chatbots and analyzed their privacy details on the Apple App Store. Their findings are as of February 18th, 2025.
[We do not recommend any AI chatbots for casual users; they are a tempting way to unwittingly share your data.]
Tom Huddleston Jr.: Bill Gates: Within 10 years, AI Will Replace Many Doctors And Teachers - Humans Won't Be Needed "For Most Things". (CNBC, March 26, 2025)
That's what the Microsoft co-founder and billionaire philanthropist told comedian Jimmy Fallon during an interview on NBC's "The Tonight Show" in February. At the moment, expertise remains "rare", Gates explained, pointing to human specialists we still rely on in many fields, including "a great doctor" or "a great teacher". But "with AI, over the next decade, that will become free, commonplace - great medical advice, great tutoring", Gates said.
In other words, the world is entering a new era of what Gates called "free intelligence" in an interview last month with Harvard University professor and happiness-expert Arthur Brooks. The result will be rapid advances in AI-powered technologies that are accessible and touch nearly every aspect of our lives, Gates has said, from improved medicines and diagnoses to widely-available AI tutors and virtual assistants. "It's very profound and even a little bit scary - because it's happening very quickly, and there is no upper bound", Gates told Brooks.
The debate over how, exactly, most humans will fit into this AI-powered future is ongoing. Some experts say AI will help humans work more efficiently - rather than replacing them altogether - and spur economic growth that leads to more jobs being created. Others, like Microsoft AI CEO Mustafa Suleyman, counter that continued technological advancements over the next several years will change what most jobs look like across nearly every industry, and have a "hugely-destabilizing" impact on the workforce.
NEW: Graham Morehead: Professor Answers AI Questions. (23-min. video; One News Page, March 25, 2025)
Graham Morehead, professor of AI and machine learning at Gonzaga University, joins WIRED to answer the Internet's burning questions about artificial intelligence.
Jack Wallen: DuckDuckGo's AI Beats Perplexity In One Big Way - And It's Free To Use. (ZDNet, March 10, 2025)
After giving Duck.ai a trial run, I'm increasingly favoring it over Perplexity. Here's why.
I've been a fan of DuckDuckGo for a long time. I find the search engine to be far more trustworthy than Google, and I do enjoy my privacy. But when I heard that the company was dipping its webbed feet into the AI waters, my initial reaction was a roll of the eyes.
Then I gave Duck.ai a go - and was immediately impressed. (DuckDuckGo's AI features launched in June 2024 and came out of beta last week.) Duck.ai is free, and it does something that other similar products don't: it gives you a choice. You can choose between the proprietary GPT-4o mini, o3-mini, and Claude 3 services or go open-source with Llama 3.3 and Mistral Small 3.
Duck.ai is also private: All of your queries are anonymized by DuckDuckGo, so you can be sure no third-party will ever have access to your AI chats.
[With artificial intelligence, never say "never"!]
NEW: Tari Ibaba, Coding Beauty: Google Just Confirmed The AI Reality That Many Programmers Are Desperately Trying To Deny. (Medium, February 20, 2025)
AI is slowly taking over coding, but many programmers are still sticking their heads in the sand about what's coming…
Google's Chief Scientist just made a telling revelation: AI now generates at least 25% of their code.
Can you see? It's happening now - at top software companies with billions of active lines of code.
Scott J Mulligan: OpenAI Releases Its New o3-mini Reasoning Model For Free. (MIT Technology Review, January 31, 2025)
OpenAI just released o3-mini, a reasoning model that's faster, cheaper, and more accurate than its predecessor.
NEW: Tari Ibaba, Coding Beauty: DeepSeek Really Destroyed OpenAI And ChatGPT Without Even Trying. (Medium, January 30, 2025)
Just when U.S. Big Tech thought they were light years ahead of everyone else, just because they had all the money in the world… DeepSeek just came over from China and destroyed them with its shocking new AI model.
After these tech giants blindly poured all those Billions and Billions of dollars into their models in desperate attempts to stay ahead in the AI race... DeepSeek spent just a tiny, tiny fraction of that - less than US$6-Million - to train a model that destroys 97% of all the major models like GPT-4 and Gemini in every way. And it's far, far cheaper to run, too!
[Note: WithOUT subscribing to Medium, you can access about half of each (excellent) Tari Ibaba article.]
Caiwei Chen: How Chinese Company DeepSeek Released A Top AI-Reasoning Model - Despite U.S. Sanctions (MIT Technology Review, January 24, 2025)
With a new reasoning model that matches the performance of ChatGPT o1, DeepSeek managed to turn restrictions into innovation.
MongoDB: Perplexed by Perplexity. (TeamBlind/Tech Industry, March 10, 2024)
Have you guys tried Perplexity, the Jeff-Bezos-backed start-up darling that's already valued at $1B? It uses AI to provide simple, direct answers to everyday questions. It's literally a thin wrapper on ChatGPT.
My question is: What is stopping Google, MS, or even ChatGPT from providing the same sort of app on top of their AI systems? What's the proprietary part of their service that can't easily be replicated?


OLD - Amazing Bogus Dangers of WiFi:

NEW: Cory Doctorow: (20 years ago today!) WiFUD: "Security Experts" Report On The Dangers Of WiFi. (Pluralistic, April 10, 2023/Boing Boing, April 10, 2003/Craphound, April 10, 2003)
Amazing bogus "WiFi-security" study: Z/Yen set up two wireless access points and monitored activity on them. They report that 25% of the connections were "deliberate" (which, I assume, means made through selecting the SSID instead of inadvertently associating with the network because your card was set to connect to the strongest-available signal) and that 71% of the connected users sent email. Fair enough - that sounds like the right kind of numbers for me. But the amazing thing is what Z/Yen and its client, RSA, conclude: that the 25% of the people who deliberately associated with the network were "malicious", and that the 71% who sent email were sending spam. This is such a transparently, deliberately (heh) stupid conclusion, it boggles the mind: how can "deliberate" equate to "malicious"? How can "sending email" equate to "sending spam"?
These experts' motivation is rather transparent: If you are in the business of selling security, you require customers who feel insecure. WiFi, by dint of its novelty and popularity, is a predictable target for shrill security warnings and a healthy source of potential revenue. We can only hope that no one takes these dishonest conclusions at face value.
[20 years later, those numbers seem conservative. Just as likely, "malicious" got attached to the wrong statistic.
"It was not called the Net of a Million Lies for nothing."
     -- Vernor Vinge ("A Fire Upon The Deep", 1992)
]



Tiffany Ng: The Best Programming Language For The End Of The World: Once The Grid Goes Down, An Old Programming Language Called FORTH - And A New Operating System Called Collapse OS - May Be Our Only Salvation. (WIRED, March 26, 2025)
Once I started thinking about the apocalypse, it was hard to stop. I soon found my way to the doomsday writings of a Canadian programmer named Virgil Dupras. He believes the collapse of civilization is imminent and that it will come in two waves.
First, global supply chains will crumble. Modern technology relies on a delicate web of factories and international shipping routes that are exquisitely vulnerable to rapid climate change. The iPhone uses memory chips from South Korea, semiconductors from Taiwan, and assembly lines in Brazil and China. The severing of these links will, Dupras says, catalyze total societal breakdown.
The second part will happen when the last computer crashes. The complexity of modern hardware means it's nearly impossible to repair or repurpose, and without the means to make new devices, Dupras believes there will be a slow blackout - less bang, more whimper. Routers die. Servers take their last breath. Phones crap out. Nothing works.
Except Collapse OS. Lightweight and designed to run on scavenged hardware, it's Dupras' operating system for the end of the world.

Dupras thinks the battle against climate change is futile. We've already lost.
But he's not hopeless. Dupras started building Collapse OS in 2019 in an attempt to preserve mankind's ability to program 8-bit micro-controllers. These tiny computers control things like radios and solar panels, and they can be used in everything from weather monitoring to digital storage. Dupras figured that being able to reprogram them with minimal remaining resources would be essential, post-collapse. But first he had to teach himself a suitably apocalypse-proofed programming language for the job.
In the late 1950s, the computer scientist Chuck H. Moore was working at the Smithsonian Astrophysical Observatory, predicting the future position of celestial bodies and satellites based on observational data. Machine memory was scarce - these were still the days of punch cards - and Moore needed a way to optimize processing efficiency by minimizing memory use. He developed a program that executed simple commands directly, one at a time, without needing to be recompiled. Over the next decade, it grew into a programming language that he called Forth.
Forth communicates directly with the hardware. It controls a computer's memory via commands called "words" that you define on the fly. Because the foundational set of commands sitting under those words is defined in native machine code, only a small part needs to be translated - meaning a smaller assembler and less RAM. As a result, Forth offers a remarkable amount of what Dupras calls "power density", making it the perfect foundation for Collapse OS. That matters because the lights (probably) won't go off forever - instead, our easy world of electricity on tap will be replaced by precious and hard-won local generators. Efficient use of processing power will be pivotal. In a post on Collapse.org, his sprawling manifesto/blog/brain dump/manual, Dupras describes how his discovery of Forth conjured "what alcoholics refer to as a moment of clarity".
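To make the "words" idea concrete, here is a toy stack interpreter in Python - not Forth itself, and vastly simpler than Collapse OS - showing how new words are defined on the fly in terms of existing ones:

```python
# Toy illustration of Forth's core idea: a data stack plus a dictionary
# of "words". New words are defined at runtime from existing ones.
# This is a Python sketch of the programming model, not real Forth.

stack = []
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
}

def define(name, body):
    """Roughly ': name body ;' in Forth - a new word built from old ones."""
    words[name] = lambda: run(body)

def run(source):
    for token in source.split():
        if token in words:
            words[token]()            # execute the word
        else:
            stack.append(int(token))  # anything else is a number literal

# Define SQUARE as 'dup *', then compute 7 squared plus 1:
define("square", "dup *")
run("7 square 1 +")
print(stack)  # -> [50]
```

Real Forth typically compiles word definitions into the dictionary as threaded native code, which is where the small-assembler, low-RAM "power density" comes from; this sketch only mimics the surface model.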
It took Dupras two years to finish Collapse OS. Booting a copy of it from a USB stick gives tech-savvy users the ability to program microcontrollers, which, in turn, could allow them to automate greenhouses, control phone lines, and even regulate power. But Dupras knew that wouldn't be enough to rebuild society after the collapse. So in 2022, he began work on Dusk OS - a version of Collapse OS that runs on modern devices. Dupras used Forth to build his own compiler that made Dusk OS compatible with code written in C (the foundation of most modern software). This way, without having to rewrite from scratch logic that already exists, Dusk OS is able to retrieve and edit text and access file formats commonly used to back up devices. It can be emulated to work on smartwatches and old tablets and is designed to be hacked and bootstrapped to its user's liking.
At first I couldn't see why any of this would even matter: Surely computer access won't be a priority when we're fighting each other for food? But Dupras makes a good point: What happens after we've reacquainted ourselves with hunting and gathering? If we want to rebuild society, we'll need to know how. And in the event of a civilizational collapse, a lot of our collective expertise will be locked away on hard drives or lost in the cloud. Dupras hopes that Dusk OS will give post-collapse humans access to archives of lost knowledge, like the Svalbard Global Seed Vault for human endeavor. The catch? It's best to have Dusk OS downloaded on an old phone, memory stick, or laptop before the collapse. Otherwise, without the internet, you'll only be able to get it by copying it from someone who already has it installed.
Which brings us to the other thing - the reason Dupras equates proficiency in Forth to power. Very few people will have both a copy of Dusk OS and the knowledge to operate it. This select group will hold the keys to rebuilding society and will become, in effect, post-collapse philosopher-kings. It was time for me to go Forth and conquer.
[Back when the first factory-built home computers arrived, we bought a Radio Shack TRS-80 Model 1. We saw the potential, and soon formed Miller Microcomputer Services. But those early computers were weak and slow, so we looked for solutions; we discovered Forth, and soon we'd partnered with some Forth experts and were offering practical application software built on our own MMSForth. We left Forth when affordable personal computers grew powerful and quick - but yes, it's as good as they say!]


Internet Archive Preserves Vast Number Of U.S. Gov't. Webpages, As TrumPutin Purges The Originals:

Samantha Cole: Another "Masterful Gambit": DOGE Moves From Secure, Reliable Tape Archives To Hackable Digital Records. (404 Media, April 8, 2025)
DOGE claimed it saved "$1M-per-year" by converting 14,000 magnetic tapes to hackable digital storage.
[That's "gambit" as in "con game", by TrumPutin and the Muskrat.]
Emma Bowman: As The Trump Administration Purges Web Pages, This Group Is Rushing To Save Them. (3-min. podcast, All Things Considered; NPR, March 23, 2025)
If you've ever clicked on a hyperlink that's taken you to something called the Wayback Machine to view an old web page, you've been introduced to the Internet Archive. The non-profit, founded in 1996, is a digital library of Internet sites and cultural artifacts. This includes hundreds-of-billions of copies of government websites, news articles and data. The Wayback Machine is the archive's access point to nearly three decades of web history, and a million-or-so daily visitors flock to the Internet Archive's online address.
Six weeks into the administration, the Internet Archive said it had cataloged some 73,000 web pages that existed on U.S. government websites prior to Trump's inauguration and have since been expunged.
[The Internet Archive is A Good Group - one more good group that the Muskrat, TrumPutin and their support team want to eliminate - or better, to control and mis-use to further rewrite history. Nazi Germany revisited - with computers.]
Scott J Mulligan: Inside The Race To Archive The U.S. Government's Websites (MIT Technology Review, February 7, 2025)
Amid take-downs of various government sites and databases, several organizations are working to preserve vital climate, health, and scientific data before it's gone for good.


NEW: Scharon Harding: "Alexa, Should I Trust Amazon With My Voice Recordings?" Everything You Say To Your Amazon Echo Will Be Sent To Amazon, Starting On March 28th. (many links, short videos; Ars Technica, March 14, 2025)
Amazon is killing a free privacy feature to bolster Alexa+, its new (subscription-only) assistant.
[We believe this loss of user privacy only affects earlier Amazon-Echo devices, including Amazon's smart speakers, smart displays, Alexa Built-In on LG TVs, smart earbuds, smart eyeglass frames... but NOT smartphones or computers. To be certain, ask Amazon.]
Corey G. Johnson: Targeted: How Cambridge Analytica Used Intimate Data To Exploit Gun Owners' Private Lives. (ProPublica, February 27, 2025)
For years, some of America's most iconic gun-makers turned over sensitive personal information on customers - without their knowledge or consent - to the gun industry's main lobbying group. Political operatives then employed those details to rally firearms owners to elect pro-gun politicians running for Congress and the White House.
The strategy remained a secret for more than two decades.
In a series of stories in recent months, ProPublica revealed the inner workings of the National Shooting Sports Foundation's project, using a trove of gun-industry documents and insider interviews. We also showed how the NSSF teamed up with the controversial political consulting firm Cambridge Analytica to turbocharge its outreach to gun owners and others in the 2016 election. Additional internal Cambridge reports obtained by ProPublica now detail the full scope and depth of the persuasion campaign's sophistication and intrusiveness.
The political consultancy analyzed thousands of details about the lives of people in the NSSF's enormous database. Were they shopaholics? Did they gamble? Did women buy plus-size or petite underwear?
The alchemy had three phases...
Massachusetts Institute of Technology: Fiber Computer Allows Apparel To Run Apps And "Understand" The Wearer. (Tech Xplore, February 26, 2025)
What if the clothes you wear could care for your health? MIT researchers have developed an autonomous programmable computer in the form of an elastic fiber, which could monitor health conditions and physical activity, alerting the wearer to potential health risks in real-time.
The fiber computer contains a series of micro-devices, including sensors, a micro-controller, digital memory, Bluetooth modules, optical communications and a battery, making up all the necessary components of a computer in a single elastic fiber.
They fabricate the fiber computer using a thermal draw process that the Fibers@MIT group pioneered in the early 2000s. The process involves creating a macroscopic version of the fiber computer, called a preform, that contains each connected micro-device. This preform is hung in a furnace, melted, and pulled down to form a fiber, which also contains embedded lithium-ion batteries so it can power itself.
"A former group member, Juliette Marion, figured out how to create elastic conductors, so even when you stretch the fiber, the conductors don't break. We can maintain functionality while stretching it, which is crucial for processes like knitting, but also for clothes in general."
The research is published in the journal Nature.
[Egad! Was this incredible project manufactured out of whole cloth, or are we just knit-picking?]
Gemma Ware, Matt Garrow and others: Scam Factories: The Inside Story Of Southeast Asia's Brutal Fraud Compounds (The Conversation, updated February 25, 2025)
Scam Factories is a special multimedia and podcast series by The Conversation that explores the inner workings of Southeast Asia's brutal scam compounds. The lead authors of the series are Ivan Franceschini, a lecturer in Chinese Studies at the University of Melbourne; Ling Li, a PhD candidate at Ca' Foscari University of Venice; and Mark Bo, an independent researcher. The podcast series was written and produced by Gemma Ware.
Multimedia series:
Part 1 – "We could hear the screams until Midnight.": life inside Southeast Asia's brutal fraud compounds
People around the globe are swindled out of billions of dollars a year in scams. The scammers, though, are sometimes victims, too. Many are duped into jobs, then trapped in compounds and subjected to unspeakable violence.
Part 2 – From empty fields to locked cities: the rise of a billion-dollar criminal industry
Online scam operations are booming in Southeast Asia due to lax regulations, organised crime networks and corrupt local officials. Our authors are on the trail of the powerful, shadowy figures at the top.
Part 3 – Are they victims, perpetrators, or both? For scammers, freedom comes at a cost.
Escaping a scam compound is rife with risk. Some workers break out of compounds en masse; others jump from high windows to freedom. Those who succeed then face persistent questions from authorities and their families about whether they are truly a victim.
Cory Doctorow: Apple's Encryption Capitulation (Pluralistic, February 25, 2025)
The UK government has just ordered Apple to secretly compromise its security for every iOS user in the world. Instead, Apple announced it will disable a vital security feature for every UK user.
This is a terrible outcome, but it just might be the best one, given the circumstances. So let's talk about those circumstances. In 2016, Theresa May's Conservative government passed a law called the "Investigatory Powers Act", better known as the "Snooper's Charter". This was a hugely-controversial law for many reasons, but most prominent was that it allowed British spy agencies to order tech companies to secretly modify their software to facilitate surveillance. This is alarming in several ways. First, it's hard enough to implement an encryption system without making subtle errors that adversaries can exploit.
[You don't have to be a techie to read the rest of this important article, its links, and how they will affect you.]
Daniel Kuhn: Crypto Exchange Bybit Confirms Hack, As Over $1.4-Billion Worth Of ETH Leaves Wallets. (TheBlock.co, February 21, 2025)
Quick Take: Bybit, the Singapore-based centralized crypto exchange, has been hacked, according to its CEO Ben Zhou. Zhou noted that only the exchange's Ethereum cold wallet has been affected and that withdrawals are "normal".
In one of the largest crypto heists ever, hackers have reportedly made off with more than $1.4-Billion in ETH from Bybit's cold wallet. Early estimates suggest the exchange has lost over $1-Billion worth of ETH and significant quantities of other tokens, though the investigation is ongoing.
"Bybit ETH multisig cold wallet just made a transfer to our warm wallet about 1 hr ago. It appears that this specific transaction was musked, all the signers saw the musked UI which showed the correct address and the URL was from Safe. However the signing message was to change the smart contract logic of our ETH cold wallet", Bybit co-founder and CEO Ben Zhou posted to X, likely referring to a "masked" URL used to alter code while appearing legitimate. "This resulted Hacker took control of the specific ETH cold wallet we signed and transferred all ETH in the cold wallet to this unidentified address. Please rest assured that all other cold wallets are secure. All withdraws are NORMAL."
In other words, the hacker appears to have tricked Bybit's ETH cold wallet signers into approving a malicious transaction to surreptitiously gain control of the wallet.
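The failure mode here is often called "blind signing": the signers approved what a compromised web UI displayed rather than what their keys actually signed. A minimal Python sketch of the idea, using hypothetical field names and plain JSON hashing as a stand-in for Ethereum's real signing encodings (such as EIP-712), not Bybit's actual stack:

```python
import hashlib
import json

def payload_digest(tx: dict) -> str:
    """Hash the exact bytes to be signed (illustrative; real wallets
    use chain-specific encodings such as EIP-712, not plain JSON)."""
    canonical = json.dumps(tx, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# What the compromised web UI *displayed* to the multisig signers:
displayed = {"to": "0xSafeTreasuryAddress", "value": "30000 ETH"}

# What was *actually* submitted for signature: a call that swaps the
# wallet's smart-contract logic (hypothetical field names):
actual = {"to": "0xSafeTreasuryAddress", "value": "0",
          "operation": "delegatecall", "target": "0xAttackerLogic"}

# A signer who independently re-hashes the real payload on a separate,
# trusted device would see the mismatch before signing:
print(payload_digest(displayed) == payload_digest(actual))  # False
```

The practical lesson is the same one hardware-wallet vendors give: verify the raw transaction data on a device you trust, never only the web page asking for your signature.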
The Bybit hack is one of the largest - if not the largest - hack of a centralized exchange. The three previous largest hacks on record are Coincheck's 2018 hack, in which $534-Million was lost; Mt. Gox's 2014 hack, with $470-Million stolen; and FTX's 2022 hack, which saw $415-Million drained while the exchange was entering bankruptcy proceedings. For context, Chainalysis reported that $3.7-Billion was stolen across all crypto protocol and exchange attacks in 2022, the largest year for crypto theft ever. This dropped to $1.7-Billion in 2023 and $2.2-Billion in 2024.
[Bitcoin - A new way to rob the public, by adding new classes of computer errors to six-guns and banks' old human errors.]
Kevin Purdy: As The Kernel Turns: Rust-In-Linux Saga Reaches The "Linus In All-Caps" Phase. Torvalds: "You Can Avoid Rust As A C Maintainer, But You Can't Interfere With It." (Ars Technica, February 21, 2025)
Rust, a modern and notably more memory-safe language than C, once seemed like it was on a steady, calm, and gradual approach into the Linux kernel. In 2021, Linux kernel leaders, like founder and leader Linus Torvalds himself, were impressed with the language but had a "wait and see" approach. Rust for Linux gained supporters and momentum, and in October 2022, Torvalds approved a pull request adding support for Rust code in the kernel.
By late 2024, however, Rust enthusiasts were frustrated with stalls and blocks on their efforts, with the Rust-for-Linux lead quitting over "nontechnical nonsense". Torvalds said at the time that he understood it was slow, but that "old-time kernel developers are used to C" and "not exactly excited about having to learn a new language". Still, this could be considered a normal amount of open-source debate.
But over the last two months, things in one section of the Linux Kernel Mailing List have gotten tense and may now be heading toward resolution - albeit one that Torvalds does not think "needs to be all that black-and-white". Greg Kroah-Hartman, another long-time leader, largely agrees: Rust can and should enter the kernel, but nobody will be forced to deal with it if they want to keep working on more than 20 years of C code.
NEW: Panos Louridas: A History Of Cryptography, From The Spartans To The FBI
(MIT Press, February 20, 2025; Panos Louridas is the author of the book, "Cryptography".)
When Operation Trojan Shield concluded on June 8, 2021, the results were staggering: Over 800 arrests were made across 16 countries, and nearly 40 tons of drugs were seized, along with 250 guns, 55 luxury cars, and more than $48-Million in currencies and crypto-currencies.
At the core of the sting - one of the largest of its kind - was a proprietary messaging app called ANOM. The app, marketed as a secure, encryption-based communications platform, offered features beyond those of ordinary devices, such as the ability to remotely wipe all messages and data from a captured phone, effectively destroying all incriminating evidence.
The problem for users was that ANOM was run by the FBI. Its privacy-protection mechanisms were a façade: All communications were copied and relayed to participating government agencies. According to Europol, the EU agency for law enforcement, 27-million messages were collected from more than 100 countries.
This illusion of secrecy and privacy in communications reflects the deeper role of cryptography in our modern digital world. The operation highlights both the power and vulnerabilities of encryption, which has been central to secure communications for centuries. Yes, centuries. Cryptography, the art of encoding and decoding secrets, dates back to ancient Greece.
[Point well-taken: When all parties assure you that "It's secure!", you've been warned. Free, Open-Source Software (FOSS) can be audited and thus determined to be secure. (But even that still leaves you to determine whether all of your Internet links are private and secure - the old wire-tap dilemma. See tomorrow's $1.4-BILLION "Bybit Confirms Hack" article, above.)]
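One of those ancient-Greek systems - the Spartan scytale of the article's title - still makes a neat teaching example: a leather strip wrapped around a rod of a fixed diameter, the message written along the rod, unreadable once the strip is unwound. A rough Python sketch of that transposition (our own toy illustration, not code from the article):

```python
def scytale_encrypt(plaintext: str, turns: int) -> str:
    """Simulate a Spartan scytale: letters written row-by-row around
    the rod are read off column-by-column once the strip is unwound."""
    text = plaintext.replace(" ", "").upper()
    text += "X" * (-len(text) % turns)          # pad to a full wrap
    return "".join(text[i::turns] for i in range(turns))

def scytale_decrypt(ciphertext: str, turns: int) -> str:
    # Unwinding is the same transposition with rows and columns swapped.
    return scytale_encrypt(ciphertext, len(ciphertext) // turns)

msg = scytale_encrypt("ATTACK AT DAWN", 3)
print(msg)                          # AAAATCTWTKDN
print(scytale_decrypt(msg, 3))      # ATTACKATDAWN
```

With only a handful of plausible rod diameters to try, the scytale falls to brute force in seconds today - which is the article's larger point about how far the craft has come.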
Jimmy Fallon: Elon Musk's Bid To Buy OpenAI Shot Down. (10-min. YouTube video, with OpenAI starting at 2:03; The Tonight Show, February 11, 2025)
Free AI software WILL remain free.
[At least, if it's FOSS and remains open-source.]
University of Texas at Austin: Scientists Reveal WHY Your Wireless Earbuds Don't Last As Long As They Used To. The Researchers Used Advanced Imaging Technology, Such As X-Ray, To Investigate Battery Degradation In Wireless Headphones. (SciTechDaily, February 10, 2025)
Ever notice that batteries in electronics need recharging sooner than they did when they were brand new? An international research team, led by The University of Texas at Austin, has taken on this well-known challenge - battery degradation - with a twist. Instead of studying generic batteries, they're focusing on real-world technology many of us use daily: wireless earbuds. Using X-ray, infrared, and other imaging technologies, they are uncovering the complexities within these tiny devices to understand why their battery life diminishes over time.
They found that other critical components in the compact device, like the Bluetooth antenna, microphones, and circuits, clashed with the battery, creating a challenging micro-environment. This dynamic led to a temperature gradient - different temperatures at the top and bottom portions of the battery - that damaged the battery. Exposure to the real world, with many different temperatures, degrees of air quality, and other wild-card factors, also plays a role. Batteries are often designed to withstand harsh environments, but frequent environmental changes are challenging in their own way.
These findings, the researchers say, illustrate the need to think more about how batteries fit into real-world devices like phones, laptops, and vehicles. How can they be packaged to mitigate interactions with potentially-damaging components, and how can they be adjusted for different user behaviors?
[And meanwhile, let's inform users to simply POWER THEM OFF WHEN THEY ARE EXPOSED TO HEAT OR COLD!
And for the techies, this 2020 article (and ITS links!) from the same group - and for all, its leading 1-min video!
University of Texas at Austin: Powering The Future: New Room-Temperature Liquid-Metal Battery (SciTechDaily, July 15, 2020)
Researchers have created what they call a "room-temperature all-liquid-metal battery", which includes the best of both worlds of liquid- and solid-state batteries. Researchers in the Cockrell School of Engineering at The University of Texas at Austin have built a new type of battery that combines the many benefits of existing options while eliminating their key shortcomings and saving energy.]

Artificial Intelligence Without Human Wisdom? The Good And The Bad (AI, yAI, yAI!):
Emanuel Maiberg: Facebook Pushes Its Llama 4 AI Model To The Right, Wants To Present "Both Sides". (404 Media, April 10, 2025)
Facebook Llama 4, Meta's latest and best large-language model, is a big deal in the world of AI - not just because it's the most recent model from one of the biggest tech companies in the world, but because it is "open-weights", easier to modify, and more likely to be quickly adopted by a large community of developers who can adapt it for various purposes. It's good for any company to examine how its model might be biased, but Meta is particularly concerned with how Llama 4 might lean too far to the Left, reflecting the company's (Mark Zuckerberg's) broader shift to the Right during Trump's second term.
[Read why AI experts question the "scientific merits" of Meta's new policy.]
Samantha Cole: Another Masterful Gambit: DOGE Moves From Secure, Reliable Tape Archives To Hackable Digital Records. (404 Media, April 8, 2025)
DOGE claimed it saved "$1M per year" by converting 14,000 magnetic tapes to digital storage.
[That's "gambit", as in "con game".]
Michael Cornelison: AI Eats Its Own Dog Food. (Substack, March 17, 2025)
I asked ChatGPT this question: The Internet is increasingly being filled with content generated by AI. The Internet is also polluted with huge amounts of distorted opinions and data, tailored to persuade targeted persons to support particular political agendas. These distortions are increasingly being generated by AI that has been deliberately trained on distorted data. If AI models are trained using data from the Internet, how can objectivity be maintained?
[And ChatGPT gives its fascinating - yet useless - reply.]
Sigal Samuel: Is AI Really Thinking And Reasoning - Or Just Pretending To? The Best Answer - AI Has "Jagged Intelligence" - Lies In-Between Hype And Skepticism. (Vox, February 21, 2025)
The AI world is moving so fast that it's easy to get lost amid the flurry of shiny new products. OpenAI announces one, then the Chinese startup DeepSeek releases one, then OpenAI immediately puts out another one. Each is important, but focus too much on any one of them and you'll miss the really-big story of the past six months.
The big story is: AI companies now claim that their models are capable of genuine reasoning - the type of thinking you and I do when we want to solve a problem.
And the big question is: Is that true?
The stakes are high, because the answer will inform how everyone from your mom to your government should - and should not - turn to AI for help.
Samantha Kelly: Alexa Is Getting A Major AI Upgrade From Amazon. What We Know So Far... (CNet, February 21, 2025)
Alexa is about to start thinking after years of listening.
Amazon is expected to announce a major artificial-intelligence upgrade for its voice assistant Alexa, at a Feb. 26 event in New York City. The event is expected to preview Alexa's long-rumored generative-AI voice capabilities, which could significantly enhance its ability to engage in more natural, contextual conversations and to complete multi-step tasks.
NEW: Tari Ibaba, Coding Beauty: Google Just Confirmed The AI Reality That Many Programmers Are Desperately Trying To Deny. (Medium, February 20, 2025)
AI is slowly taking over coding, but many programmers are still sticking their heads in the sand about what's coming…
Google's Chief Scientist just made a telling revelation: AI now generates at least 25% of their code.
Can you see? It's happening now - at top software companies with billions of active lines of code.
All these people are still acting like AI-assisted coding is just a gimmick that nobody actually uses in production. Some people in my comment sections even said that using AI tools won't make you more productive…
Like, come on! I thought we all agreed GitHub Copilot was a smash. The over-1.3-million paying users they had this time last year wasn't enough proof? In case you don't know, software developers are not a very easy group of people to monetize; your tool must be really something, to have over 1.3-million of them pay for it! And even if most of these are from businesses, something tells me not every developer tool can get anywhere close to these numbers.
I remember the first time I used Copilot. Hmm, nice tool, pretty decent suggestions, not bad… But a few days later - when I had to code without it - that's when I realized just how much I'd already started depending on this tool. I was already getting used to the higher quality of life, and I wasn't even fully aware.
NEW: Greg Bensinger: Amazon's AI Revamp Of Alexa Assistant Nears Unveiling. (Reuters, February 5, 2025)
Amazon is set to release its long-awaited - and delayed - Alexa generative artificial-intelligence voice service, and has scheduled a press event for later this month to preview it. Once released, it would mark the most significant upgrade to the product since its initial introduction accelerated a wave of digital assistants more than a decade ago.
The new generative AI-powered Alexa represents at once a huge opportunity for Amazon, which counts more than half-a-billion Alexa-enabled devices in the market, and a tremendous risk. Amazon is hoping the revamp, designed to be able to converse with users, can convert some of its hundreds-of-millions of users into paying customers in an effort to generate a return for the unprofitable business.
The AI service will be able to respond to multiple prompts in sequence, and even act as an "agent" on behalf of users by taking actions for them without their direct involvement. The current iteration generally handles only a single request at a time.
NEW: Tari Ibaba, Coding Beauty: DeepSeek Really Destroyed OpenAI And ChatGPT Without Even Trying. (Medium, January 30, 2025)
Just when U.S. Big Tech thought they were light years ahead of everyone else, just because they had all the money in the world… DeepSeek just came over from China and destroyed them with its shocking new AI model.
After these tech giants blindly poured all those Billions and Billions of dollars into their models in desperate attempts to stay ahead in the AI race... DeepSeek spent just a tiny, tiny fraction of that - less than US$6-Million - to train a model that destroys 97% of all the major models like GPT-4 and Gemini in every way. And it's far, far cheaper to run, too!
[Note: WithOUT subscribing to Medium, you can access about half of each (excellent) Tari Ibaba article.]


Scott J Mulligan: Inside The Race To Archive The U.S. Government's Websites (MIT Technology Review, February 7, 2025)
Amid takedowns of various government sites and databases, several organizations are working to preserve vital climate, health, and scientific data before it's gone for good.
Eileen Guo: An AI Chatbot Told A User How To Kill Himself - But The Company Doesn't Want To "Censor" It. (MIT Technology Review, February 6, 2025)
While Nomi's chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions - and the company's response - are striking.
Scott J Mulligan: OpenAI Releases Its New o3-mini Reasoning Model For Free. (MIT Technology Review, January 31, 2025)
OpenAI just released o3-mini, a reasoning model that's faster, cheaper, and more accurate than its predecessor.
Caiwei Chen: How Chinese Company DeepSeek Released A Top AI Reasoning Model Despite U.S. Sanctions (MIT Technology Review, January 24, 2025)
With a new reasoning model that matches the performance of ChatGPT o1, DeepSeek managed to turn restrictions into innovation.

LibreOffice 25.2 Launches:
Mauricio B. Holguin: LibreOffice 25.2 Launches With ODF 1.4 Support, UI Overhaul, And Enhanced Privacy Controls. (AlternativeTo.net, February 6, 2025)
LibreOffice 25.2 has been launched with significant updates including full support for OpenDocument Format (ODF) 1.4, which improves compatibility with Microsoft Office and enhances the handling of ODT and ODP files. Privacy controls have been strengthened, allowing users to remove personal information from documents. Additionally, this release marks the official end of support for Windows 7, Windows 8, and Windows 8.1.
The user interface has undergone a major overhaul, featuring new downloadable themes and customizable UI colors. Additionally, the "Recent Documents" menu now includes filtering options based on the active application.
The LibreOffice Writer application benefits from several enhancements, including improved bullet points, a more-focused tracked-changes manager, and better DOCX support. New features include customizable comment backgrounds and a Page Number Wizard.
LibreOffice Calc introduces import/export support for connections.xml in OOXML, a duplicate management dialog, and enhanced Function Wizard search, along with sheet-protection options and new subtotal settings.
LibreOffice Impress sees updates with new templates, single-step object centering, and new text effects, alongside improved SVG export and presenter-notes printing.
LibreOffice 25.2 User Guides: The LibreOffice 25.2 Writer Guide is available now; others to follow, with earlier versions available now. (LibreOffice; February 6, 2025)
[Gentlemen, start your engines - in a week or so, after it has been further debugged with users of your Linux version!]


Hafiz Rashid: 25-Year-Old Elon Musk Crony Has Total Control Over U.S. Treasury Payments. (New Republic, February 4, 2025)
Marko Elez, one of Elon Musk's hand-picked operatives for his fake "Department of Government Efficiency (DOGE)", has been given complete access to critical payment systems at the Department of the Treasury - despite being only 25 years old.
Wired reports that Elez has the ability to write code on the Payment Automation Manager and Secure Payment System at the Bureau of the Fiscal Service, which control government payments that amount to more than a fifth of the U.S. economy. Elez's level of access could allow him to bypass security measures and possibly cause irreversible damage to these systems.
Talking Points Memo further reports that Elez has already used his power to significantly rewrite code for key Treasury Department payment systems.
"You could do anything with these privileges", one source with knowledge of the systems told Wired, adding that they couldn't see a reason that such access was necessary for hunting down fraud or assessing how payments are disbursed, as DOGE claims it is doing.
NEW: Maural: SOLVED - Screencast on Linux Mint (Linux Mint Forums, February 3, 2025)
I finally found out that [Ctrl+Alt+Shift+R] does work with the Cinnamon graphic version - not the MATE version I've been using.
[MMS does recommend Linux Mint Cinnamon. For another way to record your screen and sound, see Kazam in its Software Manager.]
Steven Van Metre: Apple Just Issued A TERRIFYING Warning. Here's Why The Unthinkable Is Coming!
(18-min. YouTube video; Atlas Financial Advisers, January 31, 2025)
Apple just sent out a warning that NO ONE saw coming. I'll show you why it has Wall Street panicking and tech investors scrambling.

Marc Saltzman: Digitize Your Old Paper Photos To Preserve Your Family's History. If You Love Your Printed Pictures, You Still Need A Backup In Case Of Fire, Flood Or Tornado. (AARP, January 23, 2025)
You don't have to give up the framed pictures, photo albums or shoe-boxes of memories from before photography went digital in the early 21st century. But digitizing your still photos and home movies not only can help you regain some of what is lost after a tragedy, it also can help you:
- Search through your online photo library for people, places and things via a keyword or tag.
- Repair torn or faded photos and remove red eye with smart software.
- Share images with friends and family over email and social media.
- Create fridge magnets, photo galleries for your TV and other projects.

"Whether you digitize photos yourself or have a service do it for you, the key is to just do it - before it's too late", says Louise Smith, project manager at the University of Southern California (USC) Digital Library. "It's one of those things we keep putting off."

DeepSeek:

NEW: Megan Crouse: DeepSeek Locked Down Public Database Access That Exposed Chat History. (Tech Republic, January 30, 2025)
DeepSeek shook up the tech industry over the last week, as the Chinese company's AI models rivaled American generative-AI leaders. In particular, DeepSeek's R1 competes with OpenAI o1 on some benchmarks.
Research firm Wiz Research began investigating DeepSeek soon after its generative AI took the tech world by storm. On Jan. 29, U.S.-based Wiz Research announced it responsibly disclosed a DeepSeek database previously open to the public, exposing chat logs and other sensitive information. DeepSeek locked down the database, but the discovery highlights possible risks with generative-AI models, particularly international projects.
NEW: Gal Nagli: Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History.
(Wiz Research, January 29, 2025)
DeepSeek, a Chinese AI startup, has recently garnered significant media attention due to its groundbreaking AI models, particularly the DeepSeek-R1 reasoning model. This model rivals leading AI systems like OpenAI's o1 in performance and stands out for its cost-effectiveness and efficiency.
As DeepSeek made waves in the AI space, the Wiz Research team set out to assess its external security posture and identify any potential vulnerabilities. Within minutes, we found a publicly-accessible ClickHouse database linked to DeepSeek, completely open and unauthenticated, exposing sensitive data. It was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000.
This database contained a significant volume of chat history, secret keys, back-end data and sensitive information, including log streams, API Secrets, and operational details. More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world.
NEW: Jean-Pierre Giraud: The Debian Publicity Team Will No Longer Post On X/Twitter. (Debian Micronews, January 29, 2025)
We took this decision since we feel X doesn't reflect Debian shared values as stated in our social contract, code of conduct and diversity statement. X evolved into a place where people we care about don't feel safe.
[A good move, that MMS took many years before.]
NEW: Ankush Das: How I Am Moving Away From Google's Ecosystem (It's FOSS, January 28, 2025)
One service at a time: DuckDuckGo for search engine, Proton Mail for email and calendar...
[MMS agrees with DuckDuckGo; but should we move from Thunderbird?]
Hugh Cameron: China And Russia Forge Major Tech Collaboration To Challenge US. Nvidia No Longer World's Most Valuable Company, As $500-Billion Wiped Off Market Cap. (1-min. video; Newsweek, January 27, 2025)
The valuation of Artificial-Intelligence chip-making firm Nvidia has plummeted after the debut of Chinese AI chatbot DeepSeek caused panic and mass sell-offs in the wider tech sector. In what was set to be the largest single-day loss in stock-market history, Nvidia today shed $537-Billion from its market cap, which stood at $2.9-Trillion as of Noon ET.
Nvidia has fallen from the top position to third place in the global company rankings, with Apple holding the top spot, followed by Microsoft.
Theo Burman: What Is DeepSeek AI? All About Chinese ChatGPT Rival. (Newsweek, January 27, 2025)
DeepSeek AI, a rapidly emerging player in the artificial-intelligence industry, is beginning to challenge U.S. dominance of the field. Developed by the Chinese startup DeepSeek, the open-source AI chatbot has not only gained traction in China but has also captured the attention of global markets, including the U.S.
DeepSeek AI is the brainchild of Liang Wenfeng, a former hedge-fund manager who transitioned to AI development in 2023. The platform's flagship model, DeepSeek-R1, was launched this January and quickly climbed to the top of the U.S. Apple App Store, surpassing ChatGPT in downloads.
DeepSeek's appeal lies in its free-to-use model for consumers, underpinned by its R1 reasoning engine. This is said to integrate reinforcement learning to achieve high performance with minimal computational resources. DeepSeek-R1 claims to rival OpenAI's o1 model in reasoning and mathematical problem-solving, and the platform generates Python code more effectively than ChatGPT.
Unlike OpenAI, which charges $20 to $200 per month for its services, DeepSeek offers its platform for free to individual users and charges only $0.14 per million tokens for developers. This stark contrast has made DeepSeek popular with small businesses and developers.
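The pricing gap is easy to put in concrete terms. A back-of-envelope sketch in Python, using the $0.14-per-million-tokens developer rate from the article (the monthly token volume is our own assumed workload, not a figure from the article):

```python
def api_cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Cost of an API workload billed per million tokens."""
    return tokens / 1_000_000 * price_per_million_usd

# DeepSeek's developer rate from the article: $0.14 per million tokens.
monthly_tokens = 50_000_000        # assumed workload, for illustration
cost = api_cost_usd(monthly_tokens, 0.14)
print(f"${cost:.2f} per month")    # $7.00 per month
```

Even a fairly heavy assumed workload lands well under the $20-to-$200 monthly subscriptions the article cites, which is why the rate matters so much to small developers.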
The progress of DeepSeek has partly been credited to the company's unorthodox solutions to geopolitical challenges. For example, U.S. export controls in October 2022 threatened to severely curtail Chinese development of AI. However, DeepSeek had stockpiled 10,000 of Nvidia's H100 chips and used the stockpile to continue work, though the export controls remain a challenge.
Warren Buffett Compares AI To Nuclear Weapons In Stark Warning. (CNN, May 6, 2024)
"We let a genie out of the bottle when we developed nuclear weapons. AI is somewhat similar - it's part-way out of the bottle. Scamming is going to be the growth industry of all time!"


Trump Vs. Our - and Our Government's - Internet:

Jon Brodkin: Elon Musk's Starlink Benefits As Trump Admin Rewrites Rules For $42B Grant Program. Trump Admin Decides Fiber Internet Won't Be Prioritized In BEAD Grant Program. (Ars Technica, March 6, 2025)
The Trump administration is eliminating a preference for fiber Internet in a $42.45-Billion broadband deployment program, a change that is expected to reduce spending on the most-advanced wired networks while REdirecting more money to Elon Musk's Starlink and other non-fiber Internet service providers. One report suggests Starlink could obtain $10-Billion to $20-Billion under the new rules.
Secretary of Commerce Howard Lutnick criticized the Biden administration's handling of the Broadband Equity, Access, and Deployment (BEAD) program in a statement yesterday. Lutnick said that "because of the prior Administration's woke mandates, favoritism towards certain technologies, and burdensome regulations, the program has not connected a single person to the Internet and is in dire need of a readjustment."
The program has been on hold since the change in administration, with Senator Ted Cruz (R-Texas) and other Republicans seeking rule changes. In addition to demanding an end to the fiber preference, Cruz wants to kill a requirement that ISPs receiving network-construction subsidies provide cheap broadband to people with low incomes. Cruz also criticized "unionized workforce and DEI labor requirements; climate change assessments; excessive per-location costs; and other central planning mandates".
[It was about to provide affordable broadband to many, until Cruz and others stopped it. The above is MAGA distraction talk, while diverting yet-more public money and private data to their already-wealthy friends.]
Ken Klippenstein: Leakers Declare War On Trump. (Substack, January 23, 2025)
Trump's attack on DEI triggers resistance, including against Elon Musk.
In the past 24 hours, over two dozen people from across the federal government leaked to me various internal directives and memos killing their agencies' DEI programs. One angry official even sent me Elon Musk's new official White House email address (I verified the address, belonging to the Executive Office of the President, by sending an email which didn't bounce back.)
In fact, I've gotten more leaked documents in the past day than I've gotten on any other day ever - and leaks for me were already so commonplace that someone even made a rap about it.
Government workers are angry, or some in the rank-and-file, anyway. The documents tell a story both of resistance (by those who leaked them) and obedience (by those who wrote them).
Casey Newton: Meta (Mark Zuckerberg) Just Flipped OFF The Switch That PREVENTS Misinformation From Spreading In The United States. (Platformer, January 14, 2025)
The company built effective systems to reduce the reach of fake news. Last week, it shut them down.
Last week, Meta announced a series of changes to its content moderation policies and enforcement strategies designed to curry favor with the incoming Trump administration. The company ended its fact-checking program in the United States, stopped scanning new posts for most policy violations, and created carve-outs in its community standards to allow dehumanizing speech about transgender people and immigrants. The company also killed its diversity, equity and inclusion program.
Behind the scenes, the company was also quietly dismantling a system to prevent the spread of misinformation. When the company announced on Jan. 7 that it would end its fact-checking partnerships, the company also instructed teams responsible for ranking content in the company's apps to stop penalizing misinformation.
[Can such things be? Sadly, more and more!]
NEW: Ashley Belanger: Siri "Unintentionally" Recorded Private Convos; Apple Agrees To Pay $95M. (Ars Technica, January 2, 2025)
Apple users may get $20 each for up to five Siri-enabled devices.
Apple has agreed to pay $95-Million to settle a lawsuit alleging that its voice-assistant Siri routinely recorded private conversations that were then shared with third parties and used for targeted ads.
In the proposed class-action settlement - which comes after five years of litigation - Apple admitted to no wrongdoing. Instead, the settlement refers to "unintentional" Siri activations that occurred after the "Hey, Siri" feature was introduced in 2014, where recordings were apparently prompted without users ever saying the trigger words, "Hey, Siri".
Sometimes Siri would be inadvertently activated, a whistleblower told The Guardian, when an Apple Watch was raised and speech was detected. The only clue that users seemingly had of Siri's alleged spying was eerily-accurate targeted ads that appeared after they had just been talking about specific items like Air Jordans or brands like Olive Garden, Reuters noted (claims which remain disputed).
"Siri has been engineered to protect user privacy from the beginning", Apple's spokesperson told Ars. "Siri data has never been used to build marketing profiles, and it has never been sold to anyone for any purpose. Apple settled this case to avoid additional litigation, so we can move forward from concerns about third-party grading that we already addressed in 2019. We use Siri data to improve Siri, and we are constantly developing technologies to make Siri even more private." Additionally, in 2019, Apple made changes to beef up Siri privacy, including defaulting to never retain audio recordings from Siri interactions.
It's currently unknown how many customers were affected, but if the settlement is approved, the tech giant has offered up to $20 per Siri-enabled device for any customers who made purchases between September 17, 2014, and December 31, 2024. That includes iPhones, iPads, Apple Watches, MacBooks, HomePods, iPod touches, and Apple TVs, the settlement agreement noted. Each customer can submit claims for up to five devices. A hearing at which the settlement could be approved is currently scheduled for February 14. If the settlement is certified, Apple will send notices to all affected customers. Through the settlement, customers can not only get monetary relief but also ensure that their private phone calls are permanently deleted.
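The per-claimant arithmetic in those settlement terms is simple enough to sketch (a toy calculation from the figures above; "up to $20" means actual payouts could be lower):

```python
def max_siri_payout_usd(devices: int, per_device: float = 20.0,
                        device_cap: int = 5) -> float:
    """Settlement terms from the article: up to $20 per Siri-enabled
    device, with claims accepted for at most five devices."""
    return min(devices, device_cap) * per_device

print(max_siri_payout_usd(7))   # 100.0 -> a claimant tops out at $100
```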
While the settlement appears to be a victory for Apple users after months of mediation, it potentially lets Apple off the hook pretty cheaply. If the court had certified the class action and Apple users had won, Apple could've been fined more than $1.5-Billion under the Wiretap Act alone, court filings showed.
[Little Sister is watching you - but may not be tattling as much as she used to do.]


NEW: Salvador Rodriguez: How Bluesky Grew From A Twitter Side-Project To An X Competitor.
(14-min. YouTube video; CNBC, January 19, 2025)
Not many people had heard of Bluesky, when the Twitter side-project made its debut as a separate company in 2021. The decentralized social-media platform initially flew under the radar, but user numbers skyrocketed after the U.S. election in November. This was largely because many of X's users fled to Bluesky, as they were unhappy with some of the changes that Elon Musk made to Twitter after he acquired it in 2022 and later renamed it X. Bluesky now has over 27-million users, but whether it can continue its rapid growth and compete with the likes of Musk's X and Mark Zuckerberg's Meta Threads remains to be seen.

Linux and other FOSS:

NEW: Clem·63: Beta For Linux Mint 22.2, "Zara" (Linux Mint Blog, July 14, 2025)
The team is working on a BETA release for Linux Mint 22.2. This new version introduces an HWE kernel, fingerprint authentication, theme updates, accent color support and improved libAdwaita compatibility. Work also continues in the Cinnamon edition, to make input methods and keyboard layouts compatible with Wayland. Packages and projects are being finalized. Pull requests are being merged. There is no set date for the release but we're hoping to get the BETA out by the end of July or the beginning of August.

Fotocx (was Fotoxx), a favorite FOSS application for Photo Editing/Management/Presentation:

Michael Cornelison: Fotocx 25.5 Is Now Available. (Fotocx, July 10, 2025)
Michael Cornelison: Fotocx 25.2 Is Now Available. (Fotocx, July 10, 2025)
Minor adjustments to Fotocx 25.1, the recent major release.
Michael Cornelison: Fotocx 25.1 Is Now Available. (Fotocx, July 1, 2025)
A new release of Fotocx, our favorite photo editor and collection manager! Click on its Subject (above) to see its Changes List. Click on and browse its main web page, which is updated throughout, and see its links to more video demos, etc.
NEW: The New Year Brings Fotocx 25.0 - But NOT 25.1, Etc. (Kornelix.net, January 1, 2025)
[Our favorite FOSS photo-editing and -management app just got even better! Check it out and download it from Kornelix.net.
NOTE:
Many software repositories offer "newer versions". Mike Cornelison, the creator of Fotocx, says those versions are NOT coordinated with him; HIS latest release is Fotocx 25.0, and that's the one we recommend.]
Mike Cornelison: Fotocx 24.40 Is Released. (Kornelix.net, June 5, 2024)
[A favorite app! This intro has before/after examples for every Fotocx action - and you can enlarge them for more information.]


Steven J. Vaughan-Nichols: The Year Of The European Union Linux Desktop May Finally Arrive. True Digital Sovereignty Begins At The Desktop. (with links to more; The Register, June 27, 2025)
Opinion: Microsoft, tacitly admitting it has failed to talk all the Windows 10 PC users into moving to Windows 11 after all, is – sort of, kind of – extending Windows 10 support for another year. For most users, that means they'll need to subscribe to Microsoft 365. This, in turn, means their data and meta-information will be kept in a US-based datacenter. That isn't sitting so well with many European Union (EU) organizations and companies. It doesn't sit that well with me or a lot of other people either.
A few years back, I wrote in these very pages that Microsoft didn't want you so much to buy Windows, as to subscribe to its cloud services and keep your data on its servers. If you wanted a real desktop operating system, Linux would be almost your only choice. Nothing has changed since then, except that folks are getting a wee bit more concerned about their privacy, now that President Donald Trump is in charge of the US. You may have noticed that he and his regime love getting their hands on other people's data.
Privacy isn't the only issue. Can you trust Microsoft to deliver on its service promises under American political pressure? Peter Ganten, chairman of the German-based Open-Source Business Alliance (OSBA), opined that the sanctions ordered by the US (which, he alleged, had been implemented by Microsoft) "must be a wake-up call for all those responsible for the secure availability of state and private IT and communication infrastructures."
In short, besides all the other good reasons for people to switch to the Linux desktop:
- Security.
- Linux is now easy to use.
- Thanks to Steam, you can do serious gaming on Linux.
Privacy has become much more critical. That's why several EU governments have decided that moving to the Linux desktop makes a lot of sense.
[And, he points out, much of that motion already is underway.]
NEW: Joey Sneddon: Linux Mint 22.2 Modernises Its Default Theme. (preview screenshots; OMG Ubuntu, May 8, 2025)
More details on the makeup of the upcoming Linux Mint 22.2 release (due to be released in late July or early August) have been revealed.
New Codename: Linux Mint 22.2 has been officially named "Zara", continuing distro-lead Clem's codename convention of choosing female names in (somewhat) alphabetical order for each new version.
Linux Mint 22.2 Goes Bluer: Linux Mint's default "Mint Y" theme is instantly recognisable: big slabs of grey, punctuated by colourful accents; that's not changing.
What is changing is how that grey looks. The team is introducing a steely-blue tint to the grey base in Mint-Y in an effort to make it look more modern and a tad metallic, following the likes of Apple, Firefox and GNOME.
There is another reason why Linux Mint is following the crowd. It has to do with its hitherto-stated nemesis: libadwaita.
Maybe Libadwaita Isn't That Bad: Linux Mint makes it easy to install apps from Flathub. Its Software Manager tool is plugged into Flathub (Mint pre-configures Flathub to hide unverified apps by default) so that its users have access to the thousands of apps available there - and a huge number of those apps use GTK4/libadwaita.
Libadwaita is GNOME's UI toolkit. While it standardises the look, layout and behaviour of GTK4 applications, it intentionally limits the range of theming options (for distro-makers and end-users) compared to previous GTK versions. Libadwaita is also a predominantly grey theme like Mint-Y, so the subtle bump to blue should help improve visual harmony when running modern GTK4 apps alongside Mint's (preferred) GTK3 ones.
Linux Mint 22.2 also tweaks the XDG Desktop Portal XApp to support accent colours, ensuring that the choice of accent colour set by Cinnamon, MATE and Xfce desktops is picked up by and reflected in the UI of GTK4/libadwaita Flatpak apps. Nice!
Sustainable Compromise: Rather than continuing to downgrade or fork pre-GTK4 apps (as Linux Mint 22 does), a sustainable compromise is being explored. Mint-X and Mint-Y themes gain custom libadwaita stylesheets, and changes have been made to the system libadwaita package to tell it to not use its own stylesheet.
The Result: a pragmatic approach to the way modern apps look on Mint.
NEW: New Features In Linux Mint 22.1, "Xia" (Linux Mint, December 12, 2024)
Now in beta, Linux Mint 22.1 is a long-term-support release which will be supported until 2029. It comes with updated software and brings refinements and many new features to make your desktop experience more comfortable.
First Router Designed Specifically For OpenWrt Released. The New OpenWrt One Is On Sale Now For $89 - Ultimate Gift For Right-To-Repair Enthusiasts.
(Software Freedom Conservancy, November 29, 2024)
Today, we at SFC, along with our OpenWrt member project, announce the production release of the OpenWrt One. This is the first wireless Internet router designed and built with your software freedom and right-to-repair in mind. The OpenWrt One will never be locked down and is forever unbrickable. This device services your needs as its owner and user. Everyone deserves control of their computing. The OpenWrt One takes a great first step toward bringing software rights to your home: you can control your own network with the software of your choice, and ensure your right to change, modify, and repair it as you like.
The OpenWrt One demonstrates what's possible when hardware designers and manufacturers prioritize your software right-to-repair; OpenWrt One follows the requirements of the copyleft licenses of Linux and other GPL'd programs. This device provides the fully copyleft-compliant source code release from the start. Device owners have all the rights as intended on Day 1; device owners are encouraged to take full advantage of these rights to improve and repair the software on their OpenWrt One.
[Big news, indeed! Years from now, most consumers will understand.]
Michael Krümpel: "Best LINUX Distro? The Truth Is Out There." (6-min. YouTube video; FOSS & Linux Journal, November 22, 2024)
[One of over 100 FOSS & Linux videos by Michael Krümpel!]
Miriam Bastian: Free Software Is Vital For The Public And State-Run Infrastructure Of A Free Society. (Free Software Foundation, November 19, 2024)
An Austrian petitioner succeeded in realizing what the U.S. government failed to see:
The way that governments get hooked on proprietary software tends to be predatory in nature, often based on offering gratis or low-cost samples - only to jack up prices and take away control once a government is dependent on non-free software. Trapping governments this way is a known strategy of industry giants such as Microsoft.
The great thing is that free software can simultaneously enhance the transparency, sustainability, and digital sovereignty of governments.
Transparency: No government should force its citizens to use non-transparent software, where no one can check what it really does. Free software allows its users to study the source code and thereby learn if the software is actually doing what it is supposed to do.
Sustainability: Free software is indispensable for the Right to Repair. It can considerably reduce e-waste, because devices can run much longer when we're able to modify or replace their pre-loaded software with free software. It can extend the life of hardware, even after the seller has decided to no longer maintain the pre-loaded software.
Digital sovereignty: Every government must maintain control over its computing, and not cede control to the proprietary products of companies. Government entities need to be able to run the software that powers their processes as required, not as a company dictates, and be able to modify the software if it doesn't serve as needed. In addition, they should be able to copy and share public software with their citizens and with other groups and organizations serving the public interest. Only free software grants all these freedoms.
On top of the above, there are practical advantages of free software such as the fact that it can increase interoperability, support local and small businesses, and reduce costs.
When a government, local or country-wide, finances the development of software with taxpayer money, it has an obligation to release it as free software!
NEW: Abhishek Prakash: Beginner's Guide To Install And Use Conky In Ubuntu Linux - And Linux Mint, Etc.
(It's FOSS!, November 19, 2024)
You might have seen such a screenshot of a Linux desktop in various discussion forums. And you may wonder how that guy displayed CPU, memory and other information on the desktop. The answer lies in one word: Conky. In this tutorial, I'll teach you the essentials about using Conky to customize and beautify your Linux desktop.
Conky is a lightweight system monitor available on Linux and BSD. It can display the system information and statistics such as CPU consumption, disk usage, RAM utilization, network speed, etc. in an elegant way. All the information is displayed on top of your wallpaper. It gives your desktop a live wallpaper feel.
Conky is extremely configurable, and you can change every aspect of it by modifying its configuration file. But the complex way of installing and configuring Conky usually scares away Linux beginners. Don't worry! You can still use Conky easily, thanks to a GUI tool called Conky Manager. The original Conky-Manager project was developed by Tony George, who has given us friendly tools like Aptik and Timeshift to back up your Linux installations.
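For the curious, the hand-edited route is less scary than it sounds: on Ubuntu or Mint, install with `sudo apt install conky-all`, then place a file at ~/.config/conky/conky.conf in Conky's Lua-based format. The values below are an illustrative sketch, not a recommended setup; in particular, the network-interface name varies by machine (check yours with `ip link`).

```lua
-- Minimal illustrative conky.conf (Conky 1.10+ Lua syntax)
conky.config = {
    alignment = 'top_right',       -- pin the readout to a screen corner
    update_interval = 2.0,         -- refresh every 2 seconds
    own_window = true,
    own_window_type = 'desktop',   -- draw over the wallpaper
    use_xft = true,
    font = 'DejaVu Sans Mono:size=10',
}

-- Template text: ${...} variables are filled in live by Conky.
conky.text = [[
${time %H:%M:%S}
CPU: ${cpu}% ${cpubar}
RAM: $mem / $memmax
Net: ${downspeed enp3s0} down
]]
```

Run `conky` in a terminal to test; once it looks right, add it to your desktop's startup applications.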
Jose Enrico: New iOS 18.1 Feature Drives Cops Crazy: Secret "Inactivity Reboot" Locks Up iPhone After Four Days. (Tech Times, November 12, 2024)
Cryptographer Matthew Green said the security feature of lock-down is pretty nice; moreover, since iPhones tend to lock up often, fewer unauthorized users will be able to access stored information even if they can get hold of the device physically.
Is it a security enhancement or a law-enforcement setback?
[YES.]
NEW: Brian Livingston: Perplexity Is 10 Times Better Than Google. (AskWoody Newsletter, November 11, 2024)
The chat-bot wars are well underway, and one result is that I find myself using the new, free Perplexity AI-powered "answer engine" 99% of the time, falling back on Google Search only to look up a street address or some trivial factoid.
Google has served up its now-familiar list of 10 links for years. Perplexity also points you to several websites and videos. But its result pages begin with a well-written summary of what you'd learn if you actually visited all those links and vids.
It takes me less than a minute to read Perplexity's superb, human-quality summary of whatever information I asked about. By contrast, evaluating Google's links - after I've located them amidst the search giant's omnipresent ads - can take me 10 minutes. That's the basis of the "10-times" superlative in my headline. I'll be the first to admit that that isn't really scientific, but it's close.
The Internet Archive Improves Security Amid Cyberattacks. (The Internet Archive, October 24, 2024)
Starting earlier this month, the Internet Archive faced a DDoS attack which exposed patron email addresses and encrypted passwords, and our website's JavaScript was defaced. Further, hackers recently disclosed archive.org email and encrypted passwords to a transparency website, and also sent emails to patrons by exploiting a 3rd-party help-desk system.
After temporarily taking the site off-line to enhance security, we've resumed services to Archive.org, Open Library, Wayback Machine and Archive-It. As the latest security incident is analyzed and contained by our team, we are relaunching services as defenses are strengthened - please note that services may have limited availability as we continue maintenance.
The library community has been experiencing increased cyberattacks, with the British Library, Seattle Public Library, Toronto Public Library, and now Calgary Public Library all affected by online breaches of security systems. We stand with all libraries undergoing attacks and emphasize the need for preservation institutions now more than ever.
We are grateful for your patience and support as we work through these challenges. For ongoing updates, please follow our blog and official social media channels on X/Twitter, Bluesky, and Mastodon.
[Early articles on this hack-attack can be found at October 10, below.]
Stephen Council, Tech Reporter: The Random Bay-Area Warehouse That Houses One Of Humanity's Greatest Archives. (SFGate, October 23, 2024)
The Internet Archive, a San Francisco non-profit dedicated to recording the web and digitizing every published work, opened up its Richmond, California warehouse yesterday for an evening of tours and celebration. Guests and volunteers mingled around a tamale cart and open bar. Recently digitized movie film played on a large projector. And founder Brewster Kahle showed off shipping containers full of incredibly niche media, all saved for posterity.
Thomas Claburn: Linus Torvalds Affirms Expulsion Of Russian Maintainers. (The Register, October 23, 2024)
Today, Linux creator Linus Torvalds affirmed the removal last week of about a dozen Linux-Kernel maintainers associated with Russia, due to sanctions by the U.S. and other countries.


**Dick, Jill and MMS: Happy Birthday Today, UBUNTU LINUX (b. October 20, 2004)!!**
Big thanks! And also to Debian Linux (b.1993), Linus Torvalds's Linux Kernel (b.1991/1992/1994) and the preceding Minix, Multics, Unix, etc. upon which Ubuntu Linux was built.
Also to Linux Mint (b.2006) and other distros that built upon that powerful Linux/Debian/Ubuntu thread - and thanks to ALL the fine Free, Open-Source Software (FOSS) that came before and since, and to the entire Linux community that keeps improving Linux and introducing it to others!



Matt Stoller: Monopoly Round-Up: Economic Termites Preparing to Feast? (Big, November 25, 2024)
Big business wants white-glove treatment in Donald Trump's America. One example, economic-termite Verisign, is illustrative of how Washington, D.C. is accommodating the new administration.
Does this particular corporation matter? Well, it's a billion-and-a-half dollars a year, so it's not trivial. But more importantly, multiply this kind of thinking across every agency in a Federal government that structures a $25-Trillion economy, and soon you're talking real money.
U.S. Department of Energy: Next-Gen Electronics Breakthrough: Harnessing The "Edge Of Chaos" For High-Performance, Efficient Microchips. (Sci-Tech Daily, October 18, 2024)
Researchers have discovered how to help electronic chips overcome signal losses, making chips simpler and more efficient.
A new study shows that electronic chips can be dramatically simplified by using the "Edge Of Chaos" effect. This allows long metal wires on a semi-stable material to act like superconductors and amplify signals - potentially transforming chip design by eliminating the need for transistor amplifiers, and reducing power usage.


Marie Boran: Hackers Claim "Catastrophic" Internet Archive Attack. (Newsweek, October 10, 2024)
A group linked to a pro-Palestinian hacktivist movement has launched a catastrophic cyberattack revealing the details of 31-million people, compromising their email addresses and screen names.
An account on "X" (ex-Twitter) under the name SN_BlackMeta claimed responsibility for the attack on The Internet Archive, a non-profit organization, and implied that further attacks were planned. The Internet Archive is known for its digital library and the Wayback Machine.
SN_BlackMeta has previously been linked to an attack against a Middle-Eastern financial institution earlier this year, and a security firm has linked it to a pro-Palestinian hacktivist movement.
Encrypted passwords were also exposed and, although these are relatively safe, users have been advised to change their passwords. And one expert has told Newsweek that people should avoid browsing or using any files obtained from the Internet Archive until it has declared an "all clear".
This breach was accompanied by a series of Distributed Denial-of-Service (DDoS) attacks that temporarily took down the organization's website, archive.org, yesterday and are continuing to affect the website. The Wayback Machine is also inaccessible right now.
Lily Hay Newman and Kate Knibbs: Internet Archive Breach Exposes 31-Million Users. (Wired, October 9, 2024)
The hack exposed the data of 31-million users, as the embattled Wayback Machine maker scrambles to stay online and contain the fallout of digital - and legal - attacks.
An "illicit-JavaScript" pop-up on the Internet Archive proclaimed on Wednesday afternoon that the site had suffered a major data breach. Hours later, the organization confirmed the incident. Bleeping Computer, which first reported the breach, also confirmed the validity of the data.
Longtime security researcher Troy Hunt, who runs the data-breach-notification website Have I Been Pwned (HIBP) also confirmed that the breach is real. He said it occurred in September and that the stolen trove contains 31-million unique email addresses along with usernames, bcrypt password hashes, and other system data.
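The reason the stolen bcrypt hashes are "relatively safe" is that bcrypt is a salted, deliberately slow, one-way function: even with the hash in hand, an attacker must brute-force each password individually. A minimal sketch of the same idea using only Python's standard library (PBKDF2 as a stand-in, since bcrypt itself requires a third-party package; the iteration count here is illustrative, not a production recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salted, slow, one-way hash: store (salt, digest), never the password."""
    salt = salt or os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, digest, iterations=100_000):
    """Recompute the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("wrong guess", salt, digest))                   # False
```

The per-user salt is what stops attackers from cracking all 31-million hashes at once with a precomputed table; the slow iteration count is what makes each individual guess expensive.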


Moonchild, Pale Moon: CVE-2024-9680: "Use-After-Free In Animation Timeline" (Pale Moon forum, October 10, 2024)
Just in case people worry about the critical sec vulnerability:
"CVE-2024-9680: Use-after-free in Animation timeline"
As listed in MFSA 2024-15, it does not apply to Pale Moon or UXP.
[Good! I've been evaluating the Pale Moon web browser, and it's been working well!]
Joseph Cox: Thousands Of Internal AI-Training Datasets, Tools Exposed To Anyone On The Internet. (404 Media, October 9, 2024)
Thousands of machine-learning tools, including some belonging to large tech companies, are exposed to the open internet, letting anyone interact with them and potentially expose sensitive data. "In addition to the ML models themselves, the exposed data can include training datasets, hyperparameters, and sometimes even raw data used to build models", a security researcher said.
NEW: Brandon Vigliarolo: "Critical" CUPS Vulnerability Chain Easy To Use For Massive DDoS Attacks. Also, Rooting For Russian Cybercriminals, A New DDoS Record, Sneaky Linux Server Malware And More. (The Register, October 7, 2024)
The critical vulnerability in the Common Unix Printing System (CUPS) reported last week might have required some very particular circumstances to exploit, but Akamai researchers are warning the same vulnerabilities can easily be exploited for mass DDoS attacks.
Cory Doctorow: China Hacked Verizon, AT&T And Lumen Using CALEA, The FBI's Backdoor. (Pluralistic, October 7, 2024)
State-affiliated Chinese hackers penetrated AT&T, Verizon, Lumen and others; they entered their networks and spent months intercepting U.S. traffic – from individuals, firms, government officials, etc – and they did it all without having to exploit any code vulnerabilities. Instead, they used the back door that the FBI requires every carrier to furnish.
In 1994, Bill Clinton signed CALEA into law. The Communications Assistance for Law Enforcement Act requires every US telecommunications network to be designed around facilitating access to law-enforcement wiretaps. Prior to CALEA, telecoms operators were often at pains to design their networks to resist infiltration and interception. Even if a telco didn't go that far, they were at the very least indifferent to the needs of law enforcement, and attuned instead to building efficient, robust networks.
Predictably, CALEA met stiff opposition from powerful telecoms companies as it worked its way through Congress, but the Clinton administration bought them off with hundreds of millions of dollars in subsidies to acquire wiretap-facilitation technologies. Immediately, a new industry sprang into being; companies that promised to help the carriers hack themselves, punching back doors into their networks. The pioneers of this dirty business were overwhelmingly founded by ex-Israeli signals-intelligence personnel, though they often poached senior American military and intelligence officials to serve as the face of their operations and liaise with their former colleagues in law enforcement and intelligence.
Telcos weren't the only opponents of CALEA, of course. Security experts – those who weren't hoping to cash in on government pork, anyways – warned that there was no way to make a back door that was only useful to the "good guys" but would keep the "bad guys" out.
These experts were – then as now – dismissed as neurotic worriers who simultaneously failed to understand the need to facilitate mass surveillance in order to keep the nation safe, and who lacked appropriate faith in American ingenuity. If we can put a man on the Moon, surely we can build a security system that selectively fails when a cop needs it to, but stands up to every crook, bully, corporate snoop and foreign government. In other words: "We have faith in you! NERD HARDER!"
NERD HARDER! has been the answer ever since CALEA – and related Clinton-era initiatives, like the NSA's failed Clipper Chip program, which would have put a spy chip in every computer, and, eventually, every phone and gadget.
["Clinton signed into law" and "Clinton-era initiatives" do NOT signify a POLITICAL motivation - only that that's WHEN it happened. More likely, it was an FBI/NSA initiative that seemed to make more sense in those "We're smart, they're not!" less-computer-savvy times. But WHY has CALEA REMAINED ACTIVE into THIS YEAR??]


Ashley Belanger: Artist Appeals Copyright Denial For Prize-Winning AI-Generated Work. (lovely, uh, painting?; Ars Technica, October 7, 2024)
AI art may create a whole new world of copyright trolling, expert warns.
Jason M. Allen - a synthetic media artist whose Midjourney-generated work "Théâtre D'opéra Spatial" went viral and incited backlash after winning a state-fair art competition - is not giving up his fight with the U.S. Copyright Office.
Last fall, the Copyright Office refused to register Allen's work, claiming that almost the entire work was AI-generated and insisting that copyright registration requires more human authorship than simply plugging a prompt into Midjourney.
"Just as the advent of the camera ushered in a previously un-imagined art form, AI-assisted art holds the potential to do the same", Allen argued. "This evolution should be embraced as a positive development in the creative landscape. When photography first gained popularity, critics argued that it lacked skill and artistry; yet it has since become a highly-respected and valued art form."
[This author, and perhaps Allen, fail to point out that, under U.S. copyright law, the person who snapped the shutter CAN copyright the photo - easily, and for free. So, what's new?]
Kavita Iyer: Chinese Hackers Infiltrated Major U.S. Telecom Firms. (Techworm, October 7, 2024)
Chinese hackers reportedly infiltrated the networks of major U.S. telecommunications companies and potentially gained access to systems used by the federal government for court-authorized network wiretapping requests, raising concerns about national security risks and cyber espionage.
Matt Burgess and Dhruv Mehrotra: License-Plate Readers Are Creating A U.S.-Wide Database Of More Than Just Cars. (DuckDuckGo Security, October 3, 2024)
From Trump campaign signs to Planned Parenthood bumper stickers, AI-powered DRN Data (owned by Motorola Solutions) license-plate readers around the U.S. are creating searchable databases that reveal Americans' political leanings and more.
Dan Goodin: Thousands Of Linux Systems Infected By Stealthy Malware Perfctl Since 2021. (Ars Technica, October 3, 2024)
Thousands of machines running Linux have been infected by a malware strain that's notable for its stealth, the number of mis-configurations it can exploit, and the breadth of malicious activities it can perform, researchers reported today.
The malware has been circulating since at least 2021. It gets installed by exploiting more than 20,000 common misconfigurations, a capability that may make millions of machines connected to the Internet potential targets, researchers from Aqua Security said. It can also exploit CVE-2023-33426, a vulnerability with a severity rating of 10 out of 10 that was patched last year in Apache RocketMQ, a messaging and streaming platform that's found on many Linux machines.
The researchers are calling the malware Perfctl, the name of a malicious component that surreptitiously mines crypto-currency. The unknown developers of the malware gave the process a name that combines the perf Linux monitoring tool and ctl, an abbreviation commonly used with command-line tools. A signature characteristic of Perfctl is its use of process and file names that are identical or similar to those commonly found in Linux environments. The naming convention is one of the many ways the malware attempts to escape notice of infected users.
[Sigh; Linux gets its turn (see the articles below re Windows, Mac, etc.). This article contains many technical details, which should enable Linux developers to develop early fixes. We await more info as to which Linux tools are safe now, and when updates will become available for others.]
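The camouflage trick the researchers describe, splicing a malicious process name out of familiar tool fragments ("perf" + "ctl"), can be illustrated with a toy check. Everything below is a hypothetical sketch for illustration; the allow-list and fragment set are small samples we chose, not anything from the Aqua Security report:

```python
# Real binaries an admin would expect to see (small illustrative sample).
KNOWN_BINARIES = {"perf", "systemctl", "sysctl", "journalctl", "cron", "sshd"}

# Name fragments that make a process name "look" legitimate at a glance.
FAMILIAR_PARTS = {"perf", "sys", "ctl", "cron", "ssh", "journal"}

def looks_like_mimicry(name):
    """Flag names that are NOT known binaries but are built from familiar pieces."""
    if name in KNOWN_BINARIES:
        return False  # a genuine tool, nothing suspicious about the name itself
    hits = sum(1 for part in FAMILIAR_PARTS if part in name)
    return hits >= 2  # two or more familiar fragments in an unknown name

print(looks_like_mimicry("perfctl"))    # True: perf + ctl, but not a real tool
print(looks_like_mimicry("systemctl"))  # False: a genuine binary
```

A toy like this would never catch real malware on its own (Perfctl also hides via other techniques), but it shows why "plausible-sounding name" is such cheap, effective camouflage on a busy server.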
Ravie Lakshmanan: Google Adds New Pixel Security Features To Block 2G Exploits And Baseband Attacks. (The Hacker News, October 3, 2024)
Google has revealed the various security guardrails that have been incorporated into its latest Pixel devices to counter the rising threat posed by baseband security attacks.
The cellular baseband (i.e., modem) refers to a processor on the device that's responsible for handling all connectivity, such as LTE, 4G, and 5G, with a mobile phone cell tower or base station over a radio interface. "This function inherently involves processing external inputs, which may originate from untrusted sources", Sherk Chung and Stephan Chen from the Pixel team, and Roger Piqueras Jover and Ivan Lozano from the company's Android team said in a blog post shared with The Hacker News.
Ravie Lakshmanan: LockBit (aka Bitwise Spider) Ransomware And Evil Corp Members Arrested And Sanctioned In Joint Global Effort. (The Hacker News, October 3, 2024)
A new wave of international law enforcement actions has led to four arrests and the takedown of nine servers linked to the LockBit (a.k.a. Bitwise Spider) ransomware operation, marking the latest salvo against what was once a prolific financially-motivated group. This includes the arrest of a suspected LockBit developer in France (while on holiday outside of Russia), two individuals in the U.K. who allegedly supported an affiliate, and an administrator of a bulletproof hosting service in Spain used by the ransomware group, Europol said in a statement.
In conjunction, authorities outed a Russian national named Aleksandr Ryzhenkov (a.k.a. Beverley, Corbyn_Dallas, G, Guester, and Kotosel) as one of the high-ranking members of the Evil Corp cyber-crime group, while simultaneously painting him as a LockBit affiliate. Sanctions have also been announced against seven individuals and two entities linked to the e-crime gang.
The development, part of a collaborative exercise dubbed Operation Cronos, comes nearly eight months after LockBit's online infrastructure was seized. It also follows sanctions levied against Dmitry Yuryevich Khoroshev, who was revealed to be the administrator and individual behind the "LockBitSupp" persona.
NEW: Tommy Greene, Wired: Hurricane Helene Takes Ultra-Pure Quartz Mines Off-Line, Threatens Tech Supply Chains. (Ars Technica, October 2, 2024)
Millions of people across the U.S. South have gone without power or have been forced to evacuate, following days of extreme downpours brought on by Hurricane Helene. North Carolina has borne the brunt of the devastation, with the state accounting for a third of all recorded fatalities to date. And as relief operations get underway, the eyes of the world are on a small town of about 2,000 in the western part of the state.
Spruce Pine, in Mitchell County, sits about an hour northeast of Asheville and is home to the world's biggest-known source of ultra-pure quartz - often referred to as high-purity quartz (HPQ). This material is used for manufacturing crucibles, on which global semiconductor production relies, as well as to make components within semiconductors themselves.
NEW: arindam: Mozilla's Mis-Step Costs Firefox A Top Ad-Blocker Add-On: uBlock Origin Lite. (uBOL screenshot; DebugPoint, October 2, 2024)
Mozilla has made headlines for mistakenly removing the popular uBlock Origin Lite (uBOL) add-on from its Firefox catalog. This controversial move, which stemmed from accusations of user-data collection and the use of minified code, raises significant questions about add-on governance and user trust in browser extensions.
A mis-step or misunderstanding? Last month, Mozilla's administrators claimed they had identified violations during a manual review of uBOL. They alleged that the add-on collected user data without consent, and contained automatically-generated code. However, this action drew immediate ire from Raymond Hill, the original creator of both uBlock Origin and uBOL. Hill strongly denied Mozilla's claims, asserting that uBOL had remained unchanged for over a year, boasting a mere 50 lines of easily-verifiable code.
The heart of the controversy lies in the misconception surrounding the code's structure. Minified or automatically-generated code can often trigger alarms in code-review processes, yet it doesn't necessarily equate to malicious intent. As Hill pointed out, the files in question had long been part of uBlock Origin, a trusted name in ad-blocking.
The fallout was swift. Hill announced he would cease support for the uBOL Firefox add-on, citing the "absurd" review process as a key factor in his decision.
Mozilla later apologized for the oversight and restored uBOL to its catalog, after a second review found no violations.
Yet, Hill's decision to withdraw support underscores a critical point: in the relationship between developers and platforms, clear communication is vital. Misunderstandings can lead to significant ramifications, including user frustration and developer disengagement.
For users, this means one of Firefox's most popular ad-blocking tools is now off the table. The loss stings especially for those seeking maximum performance and memory efficiency from uBOL's streamlined, declarativeNetRequest API implementation.
However, YOU CAN STILL SELF-HOST THE ADD-ON by downloading from the Releases page on GitHub: "Starting with uBOLite_2024.9.12.1004, the Firefox version of the extension will be self-hosted and can be installed from the Release section. The extension will auto-update when a newer version is available." – Hill
Lesson learned: This incident also shines a light on the challenges developers face when navigating the review processes of major platforms. As web technologies evolve, so too should the frameworks that govern them. Developers need a streamlined, fair, and transparent review process that fosters innovation, rather than stifling it. Mozilla and other platform players (Apple?) must evaluate whether their review processes strike the right balance between security and openness.
404 Media: AI Companies Are Opting You In By Default. (40-min. video and podcast; YouTube, October 2, 2024)
Over the past two weeks we've had a ton of stories where AI companies and others have opted users into AI data collection and processing by default. What the hell is going on??? They're all doing it at once!
Jason starts us off with how Udemy created a temporary 'opt-out window'. If you missed it, you're out of luck until next year. Then after the break, Sam and Joseph discuss similar stories with PayPal and LinkedIn. In the subscribers-only section, Sam talks about how a woman was essentially trapped in a driverless Waymo.
Joseph Cox: Someone Put Facial-Recognition Tech Onto Meta's Smart Glasses To Instantly Dox Strangers. (404 Media, October 2, 2024)
The technology, which marries Meta's smart Ray-Ban glasses with the facial-recognition service PimEyes and some other tools, lets someone automatically hop from a face to a name, phone number, home address, and more.
Ravie Lakshmanan: Meta Fined €91-Million For Storing Millions Of Facebook And Instagram Passwords In Plaintext. (The Hacker News, September 30, 2024)
The Irish Data Protection Commission (DPC) has fined Meta €91-Million ($101.56-Million) as part of a probe into a security lapse in March 2019, when the company disclosed that it had mistakenly stored users' passwords in plaintext in its systems.
The investigation, launched by the DPC the next month, found that the social-media giant violated four different articles under the European Union's General Data Protection Regulation (GDPR). To that end, the DPC faulted Meta for failing to promptly notify the DPC of the data breach, document personal data breaches concerning the storage of user passwords in plaintext, and utilize proper technical measures to ensure the confidentiality of users' passwords.
Kevin Williams: Why It's Time To Take Warnings About Using Public Wi-Fi In Places Like Airports Seriously. (CNBC, September 29, 2024)
Over the years, travelers have repeatedly been warned to avoid public Wi-Fi in places like airports and coffee shops. Airport Wi-Fi, in particular, is known to be a hacker honeypot, due to what is typically relatively lax security. But even though many people know they should stay away from free Wi-Fi, it proves as irresistible to travelers as it is to hackers, who are now updating an old cyber-crime tactic to take advantage.
When in public places, experts say it's best to use alternatives to public Wi-Fi networks. Use your phone's mobile hot-spot if possible. Because you created and named that hot-spot yourself - and can protect it with a strong password that only you know - an impostor network is easy to spot.
If a hot-spot isn't an option, a VPN can also provide some protection, since traffic to and from the VPN is encrypted. Even if someone else can intercept the data, they can't read it.
Dave Lee: OpenAI Is Nearly Free From Its "Do-Gooder Shackles". (The Seattle Times, September 26, 2024)
Remember this? "OpenAI is a non-profit artificial-intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."
That was the founding mission statement of OpenAI. It's now quite out of date, so let's do some light editing. Maybe something like: "OpenAI is an artificial-intelligence research company. Our goal is to advance digital intelligence to generate financial return."
That is much clearer and far more accurate. Yesterday, Reuters first reported that the company is set to announce a restructuring, under which its non-profit board will lose control over the company's core business.
Cheryl Winokur Munk: How Apple And Microsoft's Trusted Brands Are Being Used To Scam You Online. (CNBC Cyber Report; September 25, 2024)
Online scammers are increasingly using trusted tech brands like Apple and Microsoft to trick people into divulging sensitive information. Now is a particularly dangerous time, experts say, as there tends to be a rise in scams when a new product or version is released, and Apple's new iPhone just debuted.
Among recent cyber-crime campaigns were ones claiming to represent Microsoft support and Apple Mac extended-warranty services.
Should AI Be Permitted In College Classrooms? Four Scholars Weigh In. (The Conversation, September 4, 2023)
The Conversation reached out to four scholars for their take on AI as a learning tool and the reasons why they will or won't be making it a part of their classes.
[And then, under what terms should AI be used in classrooms?]
James Bandler/ProPublica, A.C. Thompson/ProPublica and Frontline, Karina Meier/Frontline: The Accelerationists' App: How Telegram Became The "Center Of Gravity" For A New Breed Of Domestic Terrorists (14-min. podcast; ProPublica and Frontline, September 3, 2024)
From attempting to incite racially-motivated violence to encouraging attacks on critical infrastructure, the alleged crimes planned and advertised by extremists on Telegram go far beyond the charges facing CEO Pavel Durov.
Senthilkumar Palani, Founder and Editor-in-chief of OSTechNix: Rust Maintainer For Linux Kernel Resigns. (35-min. YouTube video*; OSTechNix, August 29, 2024)
Wedson Almeida Filho, a maintainer of the Rust for Linux project, recently announced his resignation, citing "non-technical nonsense" as the reason for his departure. This decision follows a pattern of hostility from some Linux kernel developers toward the integration of the Rust programming language into the Linux kernel.
Filho's resignation was announced via the Linux Kernel mailing list. In the email, Filho expressed his gratitude toward the Rust for Linux team, but stated that he no longer had the energy to deal with the negativity surrounding the project.* He concluded his message by saying that while he believes memory-safe languages like Rust are the future of kernel development, he fears that if Linux doesn't embrace this, a weaker kernel will eventually result.
*- This YouTube video, of a talk Filho gave at the 2024 Linux Kernel Summit, documents the significant pushback from some audience members regarding the use of Rust in the kernel.
[This good analysis concludes that some inherent collisions DO threaten the reliability of future Linux kernels, and that Filho's concerns SHOULD be effectively addressed.]
NEW: Text Fixer: Free Web Tools (TextFixer.com, August 29, 2024*)
Use these free online tools to quickly fix text, convert text to HTML, remove line breaks, generate random words, and do many other tasks.
[*- I just began using its "Capitalize Each Word Online" tool for new Subject lines (in this MINW webpage), to capitalize the first letter of every word in one's clip-boarded text. It's instant, easy to use, and lovely! Thank you, Scott!]
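The tool's behavior is simple to describe: uppercase the first letter of every word, leave the rest alone. A rough Python equivalent is shown below; this is a sketch, and TextFixer's exact handling of punctuation and edge cases may differ.

```python
# A rough equivalent of TextFixer's "Capitalize Each Word" tool.

def capitalize_each_word(text):
    # Uppercase the first letter of each space-separated word, leaving
    # the rest of the word untouched (unlike str.title(), which would
    # also lowercase everything after the first letter).
    return " ".join(word[:1].upper() + word[1:] for word in text.split(" "))

print(capitalize_each_word("money is not wealth"))  # → Money Is Not Wealth
```

Note the difference from Python's built-in str.title(): "MINW page" stays "MINW Page" here, rather than becoming "Minw Page".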
NEW: Penguinist: Encryption, Trust, And The Hidden Dangers Of Vendor-Controlled Data (LXer.com, August 27, 2024)
In the digital age, encryption is often touted as the ultimate safeguard of privacy. For many users, the knowledge that their data is encrypted from end to end offers a sense of security, a belief that their personal information is protected from prying eyes. However, this assumption overlooks a critical factor: who holds the keys to that encrypted data? When the keys are controlled by vendors like Google or Apple, what does that mean for user privacy? This article explores the hidden dangers of vendor-controlled encryption and the trust gap it creates, particularly for open-source users and developers.
Dawn Fallik: Struggling To Unlock Your Phone? You Might Have Lost Your Fingerprints. (Wired, August 26, 2024)
The absence of these identifying marks - which can be the result of excessive typing, manual work, chemotherapy, or sports - is becoming more of an issue in the age of biometrics.
Mara Johnson-Groh, 33, lost her fingerprints about a decade ago when she started rock climbing - particularly her middle and ring fingers, where a lot of pressure is exerted on the rock.
Jessica Klein: "Should Art Be Regulated By The SEC?": NFT Artists' New Lawsuit Seeks Answers. (Wired, August 26, 2024)
At issue is whether digital collectibles can be considered a security. "What the SEC has done directly affects my ability to make a living", one plaintiff says.

Linus Torvalds On His Linux, Then and Now:

Steven Vaughan-Nichols: Linux Turns 33: Linus Torvalds, On His "Just A Hobby" Operating System. (ZDNet, adjusted to August 25, 2024)
On August 25, 1991, Finnish graduate student Linus Torvalds drafted a brief note saying he was starting a hobby operating system. The world will never be the same.
[Happy Anniversary, Linux!!]
Chris Anderson: The Mind Behind Linux: Linus Torvalds (22-min. YouTube video, 6M+ views; TED Talk, May 3, 2016)
Linus Torvalds transformed technology twice - first with the Linux kernel, which helps power the Internet, and again with Git, the source-code management system used by developers worldwide. In a rare interview with TED Curator Chris Anderson, Torvalds discusses with remarkable openness the personality traits that prompted his unique philosophy of work, engineering and life. "I am not a visionary, I'm an engineer," Torvalds says. "I'm perfectly happy with all the people who are walking around and just staring at the clouds ... but I'm looking at the ground, and I want to fix the pothole that's right in front of me before I fall in."


Scott Ruecker Upgrades A Chromebook To A Mintbook:

NEW: Scott Ruecker: My Linux-Mint Tribute (LXer Linux News, August 23, 2024)
In my previous article, I wrote about How I Turned My Chromebook Into A Mintbook and all the fun entailed in doing it. Now it's been two months since I installed Linux Mint over ChromeOS Flex on my HP laptop, and it's going great. Since I installed Mint, it's had zero crashes.
I have installed Proton and Steam and a bunch of the stuff that comes with them (because you have to), and it all installed without a hitch. My laptop only has an integrated GPU - an Intel Gemini Lake UHD Graphics 600, nowhere near the greatest. But with Wine, Steam and Proton I've gotten on there and played a couple of games - even ones that were supposedly only going to work on Windows - and everything worked just fine.
Along with that, I have installed and un-installed a few other programs and different terminals and stuff. And they have all worked, every time. It's just amazing, my laptop is everything I want it to be and more. My laptop never overheats, never freezes, and every update goes flawlessly. Upgrading from my original Linux Mint 21.3 to 22 took about an hour, and one reboot later I was running the latest version of Mint. I love it!
[As do we!]
NEW: Scott Ruecker: How I Turned My Chromebook Into A "Mintbook"! (LXer Linux News, July 8, 2024)
In my previous article, I talked about how I got a nice Chromebook a couple of months back, but it wouldn't let me install any apps to it no matter what I tried. I called HP support (it's an HP) a few times and they had no idea what was going on - which kinda ticked me off, because I wanted to be able to take advantage of all the cool things I had heard it could do.
I had burned the .iso of the latest Linux Mint 21.3 onto a USB stick, and I ran it live a few times. Everything worked fine; it saw my WiFi card, HD, graphics worked great, everything! So I knew that, if I installed it, it should work just fine too. I burned it by installing an extension onto Chrome called the "Chromebook Recovery Extension Tool".
Finally, about a week ago, I plugged in the USB, installed Mint right over ChromeOS, and in less than 30 minutes I had myself a brand new Linux Mint Laptop! It's an HP, 14", 120-gig HD, 16-gig RAM, Intel Celeron 4-core 2.6-GHz, Intel Gemini Lake GPU with a camera, microphone, USB 2.0 and 3.0 ports, a serial port and a micro-SD slot. Plus I got a 64-gig micro-SD drive to save files to, for an external backup - along with the 64-gig USB stick I had off to the side.
I went through these two links worth of tweaks:
- 21 Best Things To Do After Installing Linux Mint – Linux Mint 21.3 Edition
- Speed Up Your Mint!

I did a lot of the tweaks, changed how much of my SWAP memory I used, and all kinds of stuff. So far overall they have worked. My machine is faster and, I assume, going into the future will be much more in tune. I still have some programs I want to install and things like that but so far, I'm in love! Just like I knew I'd be. One of the great things about the Cinnamon desktop is how much you can configure it. I've customized the look of the terminal, made it see-through, tweaked the color of the theme and all kinds of things.
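The swap tweak Scott mentions revolves around one kernel knob, vm.swappiness: lower values make the kernel prefer keeping pages in RAM rather than swapping them out. Below is a minimal, hedged sketch of checking that setting on a Linux box - it is not Scott's exact procedure, just the standard knob his tweak guides adjust (the default on most distributions, Mint included, is 60).

```python
import os

def read_swappiness(path="/proc/sys/vm/swappiness"):
    """Read the kernel's current swappiness value (Linux-only)."""
    with open(path) as f:
        return int(f.read().strip())

# Guarded so the sketch is harmless on non-Linux systems.
if os.path.exists("/proc/sys/vm/swappiness"):
    print("vm.swappiness =", read_swappiness())

# Changing it requires root, typically via:
#   sudo sysctl vm.swappiness=10
# and persisting the line "vm.swappiness=10" in /etc/sysctl.conf.
```

A value around 10 is the common desktop recommendation in the Mint tweak guides linked above.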
One thing I haven't mentioned: Since installing Linux on my system, its battery life has gotten a lot better. It lasts almost twice as long as it did running ChromeOS. I'm amazed!
Why have a Chromebook that kinda works, when you can have a Linux Laptop that totally rocks!


Heather Vogell: DOJ Files Anti-Trust Suit Against RealPage, Maker of Rent-Setting Algorithm. (ProPublica, August 23, 2024)
Housing costs have emerged as a political issue in the presidential election, as the candidates travel the country making their cases. Last week, Vice-President Kamala Harris, the Democratic nominee for president, criticized landlords' use of price-setting software to determine rents. "Some corporate landlords collude with each other to set artificially-high rental prices, often using algorithms and price-fixing software to do it", she said. "It's anti-competitive, and it drives up costs."
Today, the Department of Justice and eight states sued the maker of rent-setting software that critics blame for sending rents soaring in apartment buildings across the country.
The lawsuit, which comes in the wake of a ProPublica investigation into the Texas company, accuses RealPage of taking part in an illegal price-fixing scheme.
Joel Khalili: Mike Lynch, "Britain's Bill Gates", Confirmed Dead In Superyacht Wreck. (August 22, 2024)
The serial software entrepreneur's passing comes only weeks after he was cleared of fraud by a jury in the U.S.
Will Knight: An "AI Scientist" Is Inventing And Running Its Own Experiments. (University of British Columbia in Vancouver, August 21, 2024)
Letting programs learn through "open-ended" experimentation may unlock remarkable new capabilities, as well as new risks.
Jason Dookeran: YouTube Is Losing The War Against Adblockers. (How-To Geek, August 19, 2024)
Not so long ago, it was easy to avoid YouTube ads by installing an adblocker. Now, YouTube has upped the ante, and its fight against adblockers could endanger the very creators who use the platform as their source of revenue. But how exactly is YouTube losing the war against adblockers?
[We are testing FreeTube...]
Paresh Dave: Google Has Unleashed Its Legal Fury On Hackers And Scammers. (Wired, August 15, 2024)
The tech giant says its "affirmative litigation" deters and raises awareness of bad behavior. Skeptics wonder whether these suits are too small a gesture.
Amit Katwala: This Code Breaker Is Using AI To Decode The Heart's Secret Rhythms. (Wired, August 15, 2024)
Inspired by his expertise in breaking ancient codes, Roeland Decorte built a smartphone app that continuously listens for signs of disease hidden in our pulse.
Santa Fe Institute: Shattering The Thermodynamic Limits: New Framework Reshapes Computing. (SciTechDaily, August 20, 2024)
New research offers a refined approach to calculating the energy costs of computational processes, potentially paving the way for more energy-efficient computing and advanced chip designs.
University of Texas at Austin: Artificial Intelligence Predicts Earthquakes With Unprecedented Accuracy. (SciTechDaily, August 20, 2024)
An AI algorithm developed by the University of Texas successfully predicted 70% of earthquakes during a trial in China, showcasing potential improvements in earthquake preparedness and risk management. Its performance in an international competition highlights its accuracy and adaptability.
"You don't see earthquakes coming", said Alexandros Savvaidis, a senior research scientist who leads the bureau's Texas Seismological Network Program (TexNet) - the state's seismic network. "It's a matter of milliseconds, and the only thing you can control is how prepared you are. Even with 70%, that's a huge result and could help minimize economic and human losses. It has the potential to dramatically improve earthquake preparedness worldwide."
Lily Hay Newman: The Slow-Burn Nightmare Of The National Public Data Breach (Wired, August 16, 2024)
The rolling disaster that is the breach of data broker and background-check company National Public Data is just beginning. While the breach of the company happened months ago, the company only acknowledged it publicly on Monday after someone posted what they claimed was "2.9-billion records" of people in the US, UK, and Canada, including names, physical addresses, and Social Security numbers. Ongoing analysis of the data, however, shows the story is far messier - as are the risks.
Lily Hay Newman: Nearly All Google Pixel Phones Are Exposed By Unpatched Flaw In Hidden Android App. (Wired, August 15, 2024)
An unpatched vulnerability in a hidden Android app called Showcase.apk could give an attacker the ability to gain deep access to your device. A fix is coming, but data-analytics giant Palantir says it's ditching Android devices altogether because Google's response to the vulnerability has been troubling.
Andy Greenberg: A Single Iranian Hacker Group Targeted Both Presidential Campaigns, Google Says. (Wired, August 14, 2024)
APT42, which is believed to work for Iran's Revolutionary Guard Corps, targeted about a dozen people associated with the Trump and Biden (now Harris) campaigns this Spring, according to Google's Threat Analysis Group.
Zack Whittaker: U.S. Appeals Court Rules Geofence Warrants Are Unconstitutional. (TechCrunch, August 13, 2024)
A federal appeals court has ruled that geofence warrants are unconstitutional, a decision that will limit the use of the controversial search warrants across several U.S. states. The Friday ruling from the U.S. Court of Appeals for the Fifth Circuit, which covers Louisiana, Mississippi and Texas, found that geofence warrants are "categorically prohibited by the Fourth Amendment", which protects against unreasonable searches and seizures.
Civil liberties and privacy advocates applauded the ruling, which effectively makes the use of geofence warrants unlawful across the three U.S. states for now.
Geofence warrants, also known as "reverse" search warrants, allow police to draw a shape on a map, such as over a crime scene, and demand that Google (or any other company that collects user locations) search its entire banks of location data for any phone or device that was in that area at a specific point in time.
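Conceptually, a geofence search is just a filter over stored location records: every device inside a given area during a given time window gets swept in, which is exactly why critics call the warrants overbroad. The toy sketch below illustrates the idea with a simple bounding box - it is an assumption-laden illustration, not Google's actual system, and the data is invented.

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    device_id: str
    timestamp: float  # Unix seconds
    lat: float
    lon: float

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Toy geofence search: return the IDs of every device seen inside
    the bounding box during the time window - bystanders included."""
    return sorted({
        r.device_id for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and t_start <= r.timestamp <= t_end
    })

records = [
    LocationRecord("phone-A", 1000.0, 40.001, -75.001),  # inside the box
    LocationRecord("phone-B", 1005.0, 40.002, -75.002),  # innocent bystander, also inside
    LocationRecord("phone-C", 1002.0, 41.500, -75.001),  # outside the box
]

print(geofence_query(records, 40.0, 40.01, -75.01, -75.0, 900.0, 1100.0))
# → ['phone-A', 'phone-B']
```

Note that phone-B is returned simply for having been nearby - the indiscriminate sweep the Fifth Circuit objected to.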
But critics have long argued that geofence warrants are unconstitutional because they can be overbroad and include information on entirely innocent people.
The use of geofence warrants has rocketed in recent years, at one point amounting to about one-quarter of all U.S. legal demands Google received. Because tech companies like Google, Uber, Snap and others collect and store huge amounts of their users' location data and histories on their servers, this data can be obtained by law enforcement; if the data didn't exist, the problem would be moot.
Google said late last year that it would begin storing users' location data on their devices, making geofence warrants less useful for law enforcement.
NEW: Andrew Crocker: Federal Appeals Court Finds Geofence Warrants Are "Categorically" Unconstitutional. (Electronic Freedom Foundation, August 12, 2024)
In a major decision on Friday, the federal Fifth Circuit Court of Appeals held that geofence warrants are "categorically prohibited by the Fourth Amendment". Closely following arguments EFF has made in a number of cases, the court found that geofence warrants constitute the sort of "general, exploratory rummaging" that the drafters of the Fourth Amendment intended to outlaw. EFF applauds this decision because it is essential that every person feels like they can simply take their cell phone out into the world without the fear that they might end up a criminal suspect because their location data was swept up in an open-ended digital dragnet.
[Thank you, EFF!]
NEW: Trump's 271-Page File Of J.D. Vance's "Vulnerabilities" Hacked. (Daily Beast, August 10, 2024)
The Trump campaign announced it had been illegally hacked by "foreign sources" that leaked internal documents to news organizations. Iran is the prime suspect.
NEW: Wikimania 2024, August 7-10 In Katowice, Poland (program, etc.; Wikimedia, August 10, 2024)
Every year, hundreds of Wikimedians come together to celebrate free knowledge at the annual Wikimania global conference. The 19th edition of Wikimania happened in the city of Katowice, Poland from 7–10 August as a partnership between the Wikimedians of the Central and Eastern European region and the Wikimedia Foundation. It hosted free-knowledge leaders from around the world to discuss issues, report on new projects and approaches, build networks, and exchange ideas.
Kevin Purdy: Nova Launcher, Savior Of Cruft-Filled Android Phones, Is On Life-Support. (Ars Technica, August 9, 2024)
Nova Launcher feels the "massive" layoffs at the firm that acquired it in 2022.
Melissa Heikkilä: Artificial Intelligence: Google Is Finally Taking Action To Curb Non-Consensual Deepfakes. (MIT Technology Review, August 6, 2024)
Eight months on from deepfakes of Taylor Swift, we are seeing some encouraging changes, but a lot more work needs to be done to combat the problem.
Laura Bratton: Google Just Lost The Biggest Tech Anti-Trust Case In Decades. Judge Amit Mehta Said Google Monopolized The Search And Search-Advertising Markets. (Quartz, August 5, 2024)
A federal judge ruled today that Google monopolized online-search and general search-advertising markets, violating U.S. antitrust laws.
The DOJ sued Google in 2020 for allegedly monopolizing digital search, pushing out competitors such as DuckDuckGo and Microsoft's Bing. It was the first major tech antitrust lawsuit since U.S. v. Microsoft, a 1998 case that found Microsoft monopolized computer operating systems. The trial concluded in May.
"Google's dominance has gone unchallenged for well over a decade", wrote Judge Amit Mehta in his 277-page ruling Tuesday. "Google is a monopolist, and it has acted as one to maintain its monopoly. It has violated Section 2 of the Sherman Act."
Mehta said Google's exclusive agreements with companies like Apple have allowed it to hike prices for advertisers without any blowback. He wrote "there is no evidence that any rival constrains Google's pricing decisions" and that those unconstrained pricing decisions "have fueled Google's dramatic revenue growth and allowed it to maintain high and remarkably stable operating profits." The judge noted that nearly 90% of all search queries went through Google in 2020.
Regulators in the last several years have ramped up their antitrust scrutiny of Big Tech. This year, the U.S. Federal Trade Commission and Department of Justice have collectively filed major lawsuits against Amazon, Apple, and Meta, as well as a second antitrust case against Google, which alleges monopolistic practices in the digital-advertising market.
Mehta's ruling has major implications for Google and consumers across the globe. Google will likely have to make substantial changes to its search-engine business to comply with antitrust laws, which could open up a path for OpenAI's new search engine as well as other rivals.
Windsor Johnston: An Historic New Law Would Protect Kids Online And Hold Tech Companies Accountable. (NPR, August 3, 2024)
Advocacy group Parents for Safe Online Space supports proposed legislation that will hold tech companies accountable for limiting children's exposure to harmful online content. All 50 states have laws against bullying, and every state - except Wisconsin and Alaska - includes specific references to cyber-bullying. Currently, there are no federal laws that criminalize cyber-bullying.
Josh Golin is the executive director of Fairplay, a nonprofit working to protect kids from marketing and dangerous online content from Big Tech. "For the first time ever, social media and other online platforms will have a legal responsibility to consider how they are impacting children", Golin says. "It's important for online platforms and members of Congress to recognize that regulating the use of social media for their kids has become overwhelming for families. No parent is looking for another full-time job. We need to put the responsibility back where it belongs, which is on these companies who are the ones controlling what these kids are seeing. We need to ensure that these kids are not being sent down such dangerous rabbit holes."
The legislation passed the Senate with strong bipartisan support earlier this week, and the measure now heads to the Republican-led House.
Atlanta Police Must Stop High-Tech Spying On Political Movements. (Electronic Freedom Foundation, August 1, 2024)
The Atlanta Police Department has been snooping on social media to closely monitor the meetings, protests, canvassing - even book clubs and pizza parties - of the political movement to stop "Cop City", a police training center that would destroy part of an urban forest. Activists already believed they were likely under surveillance by the Atlanta Police Department due to evidence in criminal cases brought against them, but the extent of the monitoring has only just been revealed. The Brennan Center for Justice has obtained and released over 2,000 pages of emails from inside the Atlanta Police Department chronicling how closely they were watching the social media of the movement. You can read all of the emails here.
Cory Doctorow: The U.N. Cybercrime Treaty Is A Nightmare! (Pluralistic, July 23, 2024)
U.N. treaties are dangerous, liable to capture by unholy alliances of authoritarian states and rapacious global capitalists. Yesterday, I heard from my EFF colleague, Katitza Rodriguez, about the U.N.'s Cybercrime Treaty, which is about to pass, and which is, to put it mildly, terrifying!
Cybercrime is transnational, making it hard for cops in any one jurisdiction to handle it. So there's a reason to think about formal international standards for fighting cybercrime. But that's not what's in the Cybercrime Treaty.
Here's a quick sketch of the significant defects in the U.N.'s Cybercrime Treaty:
1. The treaty has an extremely-loose definition of cybercrime, and that looseness is deliberate. In authoritarian states like China and Russia (whose delegations are the driving force behind this treaty), "cybercrime" has come to mean "anything the government disfavors, if you do it with a computer." "Cybercrime" can mean online criticism of the government, or professions of religious belief, or material supporting LGBTQ rights.
2. Nations that sign up to the Cybercrime Treaty will be obliged to help other nations fight "cybercrime" – however those other nations define it. They'll be required to provide surveillance data – for example, by forcing online services within their borders to cough up their users' private data, or even to pressure employees to install back-doors in their systems for ongoing monitoring.
3. These obligations to aid in surveillance are mandatory, but much of the Cybercrime Treaty is optional. What's optional? The human-rights safeguards. Member states "should" or "may" create standards for legality, necessity, proportionality, non-discrimination, and legitimate purpose. But even if they do, the treaty can oblige them to assist in surveillance orders that originate with other states that decided not to create these standards. When that happens, the citizens of the affected states may never find out about it. There are eight articles in the treaty that establish obligations for indefinite secrecy regarding surveillance undertaken on behalf of other signatories. That means that your government may be asked to spy on you and the people you love, they may order employees of tech companies to backdoor your account and devices, and that fact will remain secret forever. Forget challenging these sneak-and-peek orders in court – you won't even know about them.
4. Now here's the kicker: While this treaty creates broad powers to fight things governments dislike, simply by branding them "cybercrime", it actually undermines the fight against cybercrime itself. Most cybercrime involves exploiting security defects in devices and services – think of ransomware attacks – and the Cybercrime Treaty endangers the security researchers who point out these defects, creating grave criminal liability for the people we rely on to warn us when the tech vendors we rely upon have put us at risk.
5. This is the granddaddy of tech free-speech fights. Since the paper-tape days, researchers who discovered defects in critical systems have been intimidated, threatened, sued and even imprisoned for blowing the whistle. Tech giants insist that they should have a veto over who can publish true facts about the defects in their products, and dress up this demand as concern over security. "If you tell bad guys about the mistakes we made, they will exploit those bugs and harm our users. You should tell us about those bugs, sure, but only we can decide when it's the right time for our users and customers to find out about them." Instead, the Cybercrime Treaty creates new obligations on signatories to help other countries' cops and courts silence and punish security researchers who make these true disclosures, ensuring that spies and criminals will know which products aren't safe to use, but we won't (until it's too late).
A Cybercrime Treaty is a good idea, and even this U.N. Cybercrime Treaty could be salvaged. The member-states have it in their power to accept proposed revisions that would protect human rights and security researchers, narrow the definition of "cybercrime," and mandate transparency. They could establish member states' powers to refuse illegitimate requests from other countries.
[Cory's full article provides even more ammunition, so cite that if you can pass it on to other groups, news media, politicians, etc.]
NEW: Steven Vaughan-Nichols: Switzerland Federal Government Requires Releasing Its Software As Open Source. The United States Remains Reluctant To Work With Open Source, But European Countries Are Bolder. (ZDNet, July 23, 2024)
Several European countries are betting on open-source software. In the United States, eh, not so much. In the latest news from across the Atlantic, Switzerland has taken a major step forward with its "Federal Law on the Use of Electronic Means for the Fulfillment of Government Tasks" (EMBAG). This ground-breaking legislation mandates the release of the Federal government's software as open-source software (OSS).
This new law requires all public bodies to disclose the source code of software developed by or for them unless third-party rights or security concerns prevent it. This "public money, public code" approach aims to enhance government operations' transparency, security, and efficiency.
Making this move wasn't easy. It began in 2011 when the Swiss Federal Supreme Court published its court application, Open Justitia, under an OSS license. The proprietary legal-software company Weblaw wasn't happy about this. There were heated political and legal fights for more than a decade. Finally, the EMBAG was passed in 2023. Now, the law not only allows the release of OSS by the Swiss government or its contractors, but also requires the code to be released under an open-source license "unless the rights of third parties or security-related reasons would exclude or restrict this".
Professor Dr. Matthias Stürmer, head of the Institute for Public Sector Transformation at the Bern University of Applied Sciences, led the fight for this law. He hailed it as "a great opportunity for government, the IT industry, and society". Stürmer believes everyone will benefit from this regulation, as it reduces vendor lock-in for the public sector, allows companies to expand their digital business solutions, and potentially leads to reduced IT costs and improved services for taxpayers.
Zak Doffman: Google Confirms Bad News For 3-Billion Chrome Users. (Forbes, updated July 23, 2024)
In a shock move, Google has suddenly confirmed that its long-awaited killing of Chrome's dreaded tracking cookies has just crashed and burned. The company was struggling to find an approach - agreeable to regulators - that balanced its own interests with those of the wider marketing industry, but no one expected this. Coming just days after Apple warned that Chrome is always watching, the timing could not be worse.
"We are proposing an updated approach that elevates user choice", Google teased yesterday, before dropping its bombshell. "Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing."
It likely means you can choose between tracking cookies, Google's semi-anonymous Topics API, and its semi-private browsing. You'll be able to change your choice - which will apply across the web - at any time. But there's still a catch - even this hasn't yet been agreed.
Zak Doffman: Apple Warns Millions Of iPhone Users: "Stop Using Google Chrome!" (Forbes, July 18, 2024)
Few relationships are quite as complicated as the one between Apple and Google. Cue Apple's creepy new attack ad on Google - with a clear message for its 1.4-billion users: stop using Chrome on your iPhone.
Why now? Google is on a mission to convert Safari users to Chrome. It currently relies on Apple's Safari to drive most search requests from iPhones - enabled by a financial arrangement between itself and Apple, whereby Google search is the default on Safari. But that arrangement could soon be curtailed by monopoly investigations in the US and Europe. And so Google is advancing Plan B.
Chrome only has a 30% install base across iPhone users; Google's target is to increase this to 50%, bringing another 300-million iPhone users inside its data tent. Apple obviously wants to stop this from happening. Those 300-million pairs of eyeballs generate serious online revenue and, as search changes through the introduction of on-device AI, it will become a retention-versus-conversion battleground.
That's why you may have seen Apple's Safari privacy billboards popping up in the city where you live. What started as a local campaign in San Francisco has now gone global. And while the ads don't mention Chrome, they don't need to. Nothing else matters. Between them, Safari and Chrome enjoy a greater than 90% market share on mobile devices. And on iPhone, it's a straight shootout between the two of them.
Laura Bratton: 4 Takeaways From The Google Anti-Trust Trial (Quartz, May 6, 2024)
The biggest anti-trust trial of the century concluded last week. What came out of it?
The United States Department of Justice (DOJ) and Google wrapped up closing arguments in a potentially-historic anti-trust lawsuit on Friday. The DOJ has released a 370-page slideshow supporting its case that Google holds a monopoly over the search-engine market - and offered a revealing look at the business maneuvers of the digital giant.
The DOJ sued Google in 2020 for allegedly monopolizing digital search, pushing out competitors such as DuckDuckGo and Microsoft's Bing. It's the first major tech anti-trust lawsuit since U.S. vs. Microsoft, a 1998 case that found Microsoft monopolized computer operating systems.
Regulators in the last several years have ramped up their anti-trust scrutiny of Big Tech, and this year the U.S. Federal Trade Commission and Department of Justice have collectively filed major lawsuits against Amazon, Apple, and Meta - as well as a second anti-trust case against Google, which alleges monopolistic practices in the digital-advertising market.
In its final compilation for the case, the DOJ airs out some compelling evidence about just how much Google controls in the search market. According to the prosecution, Google has cornered 89% of the search-engine market - a near-90% share. By contrast, the federal regulator said Microsoft's Bing holds a 5.5% share, Yahoo a 2.2% share, and DuckDuckGo a 2.1% share. Google's search market share is even bigger on mobile devices: 98%. On desktop, it's a little less (84%). Businesses can begin to face anti-trust allegations when they hold more than 50% of a given market.
That's why the Google case has been likened to U.S. vs. Microsoft, which alleged Microsoft held a similarly overwhelming share - more than 90% - of the computer operating system market. The fact that Microsoft lost that case doesn't bode well for Google.
The Real Edwin: How Linux Saved A Fast-Food Giant (Mastodon/Seattle, May 17, 2010)
I am a Windows guy. I have always used Windows at home, work, school, everywhere with the exception of one Linux class at FIU. I have an A+ and MCTS in Windows Vista. I drink the kool-aid. But Linux saved me and the company I subcontract to (a fast-food giant) from near-total disaster. Last month McAfee posted a virus-definition update that flagged SVCHOST.EXE as a virus. This is my story of what happened.
[Lest we forget... Thanks to Bill Ricker and the Internet Archive for this good explanation of Burger King's fix of a CrowdStrike-like massive error by McAfee, 14 years before!]
Ed Bott: Defective McAfee Update Causes Worldwide Meltdown Of Windows XP PCs. (ZDNet, April 21, 2010)
Oops, they did it again. Early this morning, McAfee released an update to its antivirus definitions for corporate customers that mistakenly deleted a crucial Windows XP file, sending systems into a reboot loop and requiring tedious manual repairs. It's not the first strike for the company, either. I've got details.
[Old confirmation (and expansion) of the Varanasi article, below.]
Lakshmi Varanasi: This Is The 2nd Time CrowdStrike CEO George Kurtz Has Been At The Center Of A Global Tech Failure. (Business Insider, July 20, 2024)
- A faulty update from CrowdStrike caused a global tech outage on Friday.
- CrowdStrike CEO George Kurtz has been down this road before.
- As CTO of McAfee in 2010, Kurtz was at the center of a similar tech debacle.
A good portion of the world stood still on Friday, resulting in one of the most widespread tech outages of all time. The outage disrupted operations at major banks, airlines, retailers, and other industries after CrowdStrike, a cybersecurity giant used by Microsoft and others, pushed a faulty update. Many industries were still digging out of the debacle on Saturday. The fallout is expected to last weeks.
CrowdStrike owned up to its mistake, issuing an apology and a workaround on Friday. But it has yet to detail just how a destructive update could have been released without being caught by testing and other safeguards.
Naturally, blame has begun to target the man at the center of it all: CrowdStrike CEO George Kurtz. Tech industry analyst Anshel Sag pointed out that this isn't the first time Kurtz has played a major role in a historic IT blowout. On April 21, 2010, the antivirus company McAfee released an update to its software used by its corporate customers. The update deleted a key Windows file, causing millions of computers around the world to crash and repeatedly reboot. Much like the CrowdStrike mistake, the McAfee problem required a manual fix. Kurtz was McAfee's chief technology officer at the time. Months later, Intel acquired McAfee. And several months after that Kurtz left the company. He founded CrowdStrike in 2012 and has been its CEO ever since.
Zeba Siddiqui: CrowdStrike Update That Caused Global Outage Likely Skipped Checks, Experts Say. (Reuters, July 20, 2024)
Security experts said CrowdStrike's update of its widely-used cybersecurity software, which caused clients' computer systems to crash globally yesterday, apparently did not undergo adequate quality checks before it was deployed.
The latest version of its Falcon Sensor software was meant to make CrowdStrike clients' systems more secure against hacking, by updating the threats it defends against. But faulty code in the update files resulted in one of the most-widespread tech outages in recent years for companies using Microsoft's Windows operating system. "It looks like the vetting or the sandboxing they do when they look at code, maybe somehow this file was not included in that or slipped through."
Problems came to light quickly after the update was rolled out early yesterday, and users posted pictures on social media of computers with blue screens displaying error messages. These are known in the industry as "blue screens of death."
Patrick Wardle, a security researcher who specialises in studying threats against operating systems, said his analysis identified the code responsible for the outage. The update's problem was "in a file that contains either configuration information or signatures", he said. Such signatures are code that detects specific types of malicious code or malware. "It's very common that security products update their signatures once a day... because they're continually monitoring for new malware and because they want to make sure that their customers are protected from the latest threats. The frequency of updates is probably the reason why CrowdStrike didn't test it as much."
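Wardle's description can be sketched in miniature. The following is a purely illustrative Python example of signature-based scanning - the signature names and byte patterns below are invented for illustration, and no vendor's real detection engine looks like this - but it shows why the signature *data* (not the scanner code) is what gets updated daily, and why a malformed entry in such an update can slip through lighter testing:

```python
# Minimal sketch of signature-based malware detection (illustrative only;
# not CrowdStrike's or any vendor's actual implementation).
# A "signature" here is just a byte pattern known to appear in malicious files.

SIGNATURES = {
    "eicar-test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",  # the standard AV test string
    "fake-worm": b"\xde\xad\xbe\xef",                     # hypothetical pattern
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the given bytes."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# Vendors push updated SIGNATURES frequently, sometimes daily.
# A defective entry in one of those data updates - rather than a bug in
# the scanner itself - is the kind of failure described above.
example_clean = scan(b"just an ordinary document")
example_flagged = scan(b"xx EICAR-STANDARD-ANTIVIRUS-TEST-FILE xx")
```

The point of the sketch: because only the data table changes from day to day, it is tempting to test those updates less rigorously than the engine code - exactly the gap the experts quoted here suspect.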
It's unclear how that faulty code got into the update and why it wasn't detected before being released to customers.
"Ideally, this would have been rolled out to a limited pool first", said John Hammond, principal security researcher at Huntress Labs. "That is a safer approach to avoid a big mess like this."
Other security companies have had similar episodes in the past. McAfee's buggy antivirus update in 2010 stalled hundreds of thousands of computers.
But the global impact of this outage reflects CrowdStrike's dominance. Over half of Fortune 500 companies and many government bodies such as the top U.S. cybersecurity agency itself, the Cybersecurity and Infrastructure Security Agency, use the company's software.
Microsoft Says About 8.5-Million Of Its Devices Were Affected By CrowdStrike-Related Outage. (Reuters, July 20, 2024)
A global tech outage - that was related to a software update by cybersecurity firm CrowdStrike - affected nearly 8.5-million Microsoft devices. Microsoft said in a blog today: "We currently estimate that CrowdStrike's update affected 8.5-million Windows devices, or less than 1% of all Windows machines."
CrowdStrike has helped develop a scalable solution that will help Microsoft's Azure infrastructure accelerate a fix, Microsoft said, adding that the tech giant had worked with both Amazon Web Services and Google Cloud Platform to collaborate on the "most-effective approaches".
Air passengers worldwide faced delays, flight cancellations and headaches checking in, as airports and airlines were caught up in the IT outage that affected numerous industries ranging from banks to media companies.
Microsoft Pushed Back Against Regulators Before System Crash. (The Lever, July 19, 2024)
"Regulators should carefully avoid any intervention", the company said in response to a federal probe of security and inter-operability risks.
A little more than a year before Microsoft's systems crashed on Friday, creating global chaos in the banking, airline, and emergency-service industries, the company pushed back against regulators investigating the risks of a handful of cloud-services companies controlling the world's technological infrastructure, according to documents reviewed by The Lever.
Michael Kan: Banish The Blue Screen: How To Fix The CrowdStrike Bug On A Windows PC. (PC Magazine, July 19, 2024)
If you woke up Friday morning to a "Blue Screen of Death" on your Windows PC, you're not alone. A software bug from antivirus provider CrowdStrike has bricked countless Windows machines.
The good news is there's a fix, but it requires a few steps.
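For the record, the widely-published workaround (per CrowdStrike's own remediation guidance) was to boot the affected PC into Safe Mode and delete the faulty "channel file" from the CrowdStrike driver directory. A minimal Python sketch of just the file-matching step follows; the directory path and filename pattern are taken from public incident reports, and the deletion itself is left as a comment since it only makes sense on an affected Windows machine:

```python
# Sketch of the widely-reported CrowdStrike workaround (after booting into
# Safe Mode); path and pattern are from public reports of the July 2024 incident.
import fnmatch

CROWDSTRIKE_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
FAULTY_PATTERN = "C-00000291*.sys"   # the faulty "channel file" family

def is_faulty_channel_file(filename: str) -> bool:
    """True if a filename matches the reported faulty channel-file pattern."""
    return fnmatch.fnmatch(filename, FAULTY_PATTERN)

# On an affected machine, one would then list the directory and remove matches:
#   for name in os.listdir(CROWDSTRIKE_DIR):
#       if is_faulty_channel_file(name):
#           os.remove(os.path.join(CROWDSTRIKE_DIR, name))
# followed by a normal reboot.
```

Note the pattern is deliberately narrow: only the defective channel file is removed, leaving the rest of the Falcon sensor installation intact.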
Ryan Browne: How A Software Update From Cyber Firm CrowdStrike Caused One Of The World's Biggest IT Blackouts. (CNBC, July 19, 2024)
A faulty update issued by cybersecurity company CrowdStrike led to a cascade effect among global IT systems today, with industries ranging from banking to airlines facing outages. Banks and health-care providers saw their services disrupted and TV broadcasters went offline as businesses worldwide grappled with the ongoing outage. Air travel has been hit hard, too, with planes grounded and services delayed.
Early today, CrowdStrike released the faulty software update, which caused Microsoft Windows systems to crash. So what happened, exactly? CNBC takes a look.
Aimee Picchi: Microsoft 365 Outage Causes Widespread Airline Disruptions And Cancellations. (CBS News, July 19, 2024)
Air travel is experiencing disruptions across the globe this morning, due to an outage for customers of Microsoft 365 apps, including many major airlines. In the U.S., more than 3,000 flights within, into or out of the U.S. had been canceled as of 9 p.m. Eastern Time, while more than 11,400 flights had been delayed, according to FlightAware, a flight-tracking service. Airlines said the outage impacted the back-end systems they use to send key data, such as weight and balance information, required for planes to depart.
Air travelers posted images on social media of long lines at ticket counters, and "blue screens of death" - the Microsoft error page when its programs aren't working - on screens at various airports. The issue was caused by a software update sent from cybersecurity firm CrowdStrike to Microsoft, which it had identified in its systems and was working to resolve. "In a nutshell, this is a PR nightmare for CrowdStrike, and Microsoft and others get caught in this tornado along with millions of people currently stranded at airports around the globe", Wedbush analyst Dan Ives said in a report.
Travelers in Europe are also facing disruptions, with Lufthansa, KLM and SAS Airlines reporting issues. Switzerland's largest airport, in Zurich, said planes were not being allowed to land. In Australia, airline Jetstar canceled all flights from the Brisbane airport for the day, according to the BBC. One traveler in Scotland told The Guardian she paid $8,600 for new tickets back to the U.S., after her original flight was canceled due to the IT outage.
Rhian Hunt: CDK Cyberattack; Car-Dealers' Losses Estimated At $1-Billion. (GM Authority, July 16, 2024)
A recent series of cyberattacks on CDK Global, which provides dealer-management-system (DMS) services to auto dealerships across the U.S., has led to nearly $1-Billion in losses to those vehicle dealers.
NEW: Rob Pegoraro: AT&T Data Breach Fallout: Watch Out For Targeted Texts, Spoofed Calls. (PC Magazine, July 12, 2024)
AT&T customers, reeling from today's news of a massive theft of calling and texting records, may now find themselves facing an onslaught of scam calls and texts targeted with that stolen data. The breach is described by AT&T as "phone call and text message records of nearly all of AT&T cellular customers from May 1, 2022 to October 31, 2022 as well as on January 2, 2023", taken from an AT&T workspace hosted by the cloud provider, Snowflake.
The risks go beyond the immediate privacy violation and whatever gut-punch feelings that might inflict. Those records don't include the content of any calls or texts, but that metadata - which for some victims includes cell-site location data - can still be enormously valuable for what it can reveal about the significant relationships in somebody's life.
That has long made phone metadata attractive to law-enforcement and national-security investigations; the National Security Agency collected it in bulk for years, until Congress put a halt to the practice. But scammers can exploit it, too.
NEW: Michael Kan: Hackers Resurrect Internet Explorer To Attack Windows PCs. (PC Magazine, July 10, 2024)
Scammers are abusing an IE-related bug to install malware on Windows 10 and Windows 11 PCs, according to cybersecurity firm Check Point.
[Consider switching those PCs to Linux, as we do.]

The new Digital-Afterlife Industry (DAI):

Taiwo Adepetun: Digital Ghosts: What We Leave Behind In An Online Afterlife (The Humanist, June 16, 2025)
As humanists, many of us hold a simple yet profound belief: When we die, we're gone. There is no celestial reunion, no reincarnation, no spiritual continuation. Death marks the final chapter of our individual story, and in that acceptance, we find meaning in making the most of our time here and now.
But in the digital age, death isn't quite the disappearing act it once was:
- Our photos linger.
- Our texts stay stored.
- Our browsing habits are archived.
- Social media profiles survive us, often morphing into digital memorials.
- In some cases, our voices, gestures, and patterns of speech can be mimicked by artificial intelligence.
We are witnessing the rise of "digital ghosts" – fragments of lives that continue to echo long after a body is buried or cremated. These phenomena raise fascinating and troubling questions for secular thinkers:
- What does it mean to grieve when the dead remain online?
- How should we ethically treat digital remnants of those who can no longer give or withdraw consent?
- And in an age of AI-enhanced memory, are we truly honoring the dead or engineering illusions?
[A year later, an interesting companion article to the below.]
NEW: Kathryn Hulick: Should We Use AI To Resurrect Digital "Ghosts" Of The Dead? (Science News, May 15, 2024)
Experts warn safeguards are necessary, as the digital-afterlife industry grows.
When missing a loved one who has passed away, you might look at old photos or listen to old voicemails. Now, with artificial-intelligence technology, you can also talk with a virtual bot made to look and sound just like them.
The companies Silicon Intelligence and Super Brain already offer this service. Both rely on generative AI, including large-language models similar to the one behind ChatGPT, to sift through snippets of text, photos, audio recordings, video and other data. They use this information to create digital "ghosts" of the dead to visit the living.
Called griefbots, deadbots or re-creation services, these digital replicas of the deceased "create an illusion that a dead person is still alive and can interact with the world as if nothing actually happened, as if death didn't occur", says Katarzyna Nowaczyk-Basińska, a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge who studies how technology shapes people's experiences of death, loss and grief. She and colleague Tomasz Hollanek, a technology ethicist at the same university, recently explored the risks of technology that allows for a type of "digital immortality" in a paper published May 9 in Philosophy & Technology. Could AI technology be racing ahead of respect for human dignity? To get a handle on this, Science News spoke with Nowaczyk-Basińska.
NEW: Tomasz Hollanek and Katarzyna Nowaczyk-Basińska: Griefbots, Deadbots, Postmortem Avatars: On Responsible Applications Of Generative AI In The Digital-Afterlife Industry. (Springer-Nature, May 9, 2024)
To analyze potential negative consequences of adopting generative AI solutions in the digital afterlife industry (DAI), in this paper we present three speculative design scenarios for AI-enabled simulation of the deceased. We highlight the perspectives of the data donor, data recipient, and service interactant – terms we employ to denote those whose data is used to create "deadbots", those in possession of the donor's data after their death, and those who are meant to interact with the end product. We draw on the scenarios to map out several key ethical concerns posed by "re-creation services" and to put forward recommendations on the ethical development of AI systems in this specific area of application. The recommendations, targeted at providers of AI-enabled re-creation services, include suggestions for developing sensitive procedures for retiring deadbots, ensuring meaningful transparency, restricting access to such services to adult users only, and adhering to the principle of mutual consent of both data donors and service interactants. While we suggest practical solutions to the socio-ethical challenges posed by the emergence of re-creation services, we also emphasize the importance of ongoing interdisciplinary research at the intersection of the ethics of AI and the ethics of the DAI.


The AI Apocalypse: ALL Artificial-Intelligence Articles at Popular Science, TED, ...

Rex Huppke: "'South Park' Mocking Naked Trump = NOT FUNNY. Fake Obama Arrest Video = FUNNY!"? The Bar For Presidents Should Be Set Slightly Higher Than A Cartoon Famous For A Singing Piece Of Poop. (USA Today, July 25, 2025)
Thanks to "South Park" and its hilariously-graphic AI-depiction of President Donald Trump walking the desert naked, complete with talking genitalia, we're learning how our thin-skinned commander-in-chief defines comedy.
White House officials were outraged by the show's unflattering artificial-intelligence depiction of Trump, which is funny in itself, since the easily-triggered president is no stranger to making fake video "jokes".
On July 20, the actual president of the United States of America posted an AI-generated video of former Democratic President Barack Obama being arrested, handcuffed and hauled away. That bit of dark, authoritarian humor is apparently a real hoot, and totally acceptable, given that Trump has not apologized or threatened to sue himself for $80-Bazillion, or whatever the going rate is for things that violate the Man-Child of Mar-a-Lago's sense of decency. (As I typed "sense of decency", my laptop crashed because the machine's processor rolled its eyes too hard.)
[Funny? Outrageous? YES.]
Brendan Morrow: White House: "'South Park' Hasn't 'Been Relevant For Over 20 Years'" (After The TV Show Airs Its TRUMP PARODY). (USA TODAY, July 24, 2025)
Trey Parker and Matt Stone aren't holding back. The "South Park" creators tore into President Donald Trump - and their bosses at Paramount - in the animated show's Season-27 premiere, which referenced everything from the company's controversial settlement with the president to its shock decision to cancel "The Late Show with Stephen Colbert." Comedy Central, where "South Park" airs, is owned by Paramount.
In response to the season premiere's Trump parody, the White House slammed the show, calling the series "fourth-rate" and irrelevant.
[In their following two paragraphs, do the "Trump-Reverse" Correction to see what it actually says - about TrumPutin. I've underlined a few glaring examples, and also bolded the most absurd.]
"The Left's hypocrisy truly has no end - for years they have come after 'South Park' for what they labeled as 'offensive' content, but suddenly they are praising the show", White House spokesperson Taylor Rogers said in a statement provided today to USA Today. "Just like the creators of 'South Park', the Left has no authentic or original content, which is why their popularity continues to hit record lows."
The White House's statement continued, "This show hasn't been relevant for over 20 years and is hanging on by a thread with uninspired ideas in a desperate attempt for attention. President Trump has delivered on more promises in just six months than any other president in our country's history
[That last sentence easily wins our "TrumPutin's-Biggest-Lie-Yet Award". Well, unless its "delivered" means, "delivered to Putin and other intentional sponsors of his autocracy". Save this one, for posterity!]
- and no fourth-rate show can derail President Trump's hot streak."
[Feel free to replace "hot streak" with "disgraceful behavior" or other honest words of your choice.]
NEW: Jon Reed: Congress Is NOT Stepping Up To Regulate AI. Where Does That Leave Us Now? (CNet, July 22, 2025)
Lawmakers declined to stop states from regulating artificial intelligence, but the debate over rules for AI is just beginning.
When you turn on the faucet, you expect the water that comes out to be clean. When you go to the bank, you expect your money will still be there. When you go to the doctor, you expect they will keep your medical information private. Those expectations exist because there are rules to protect you. But when a technology arises almost overnight, the problems come first. The rules, you'd hope, would follow.
Right now, there's no technology with more hype and attention than artificial intelligence. Since ChatGPT burst onto the scene in 2022, generative AI has crept into nearly every corner of our lives. AI boosters say it's transformative, comparing it to the birth of the Internet or the Industrial Revolution in its potential to reshape society. The nature of work itself will be transformed. Scientific discovery will accelerate beyond our wildest dreams. All this from a technology that right now, is mostly just kind-of-good at writing a paragraph.
The concerns about AI? They're legion. There are questions of privacy and security. There are concerns about how AI impacts the climate and the environment. There's the problem of hallucination - that AI will completely make stuff up, with tremendous potential for misinformation. There are liability concerns: Who is responsible for the actions of an AI, or an autonomous system running off of one? Then there are the already-numerous lawsuits around copyright infringement related to training data.
Those are just today's worries. Some argue that a potential artificial intelligence smarter than humans could pose a massive, existential threat to humanity.
What to do about AI is an international debate. In Europe, the EU AI Act, which is currently being phased in, imposes guidelines on AI-based systems based on their risk to individual privacy and safety. In the US, meanwhile, Congress recently proposed barring states from enforcing their own rules around AI for a decade, without a national framework in place, before backing off during last-minute negotiations around Trump's big tax-and-spending bill.
[Remember when the USA led with such public safeguards?]
NEW: Hany Farid: How To Spot Fake AI Photos (13-min. YouTube video; TED, July 18, 2025)
How do you know if that shocking photo in your feed is real, or just another AI fake? Digital forensics expert Hany Farid explains how he helps journalists, courts and governments find structural errors in AI-generated images, offering four practical tips everyday individuals can use when facing the Internet’s war on reality. (Recorded at TED2025 on April 10, 2025)
[View this before you go back to browsing the Web, so you'll know:
- how AI enables cheaters to cheat, and
- how AI techies are catching them.
- why TrumPutin pushed an anti-privacy/anti-security/pro-AI bill, that lost,
- but he continues trying to prevent U.S. States from controlling AI-cheating,
- and then he aired this fake AI (6-min.; David Packman steps us through it, and the 8K+ Comments are spot-on),
- which was countered by this, (2-min. of the now-famous "South Park" fake), this and this and, in turn, TrumPutin's this (text; he doesn't need AI to lie. "The most-destructive president, Our country ever had!").
- and how, meanwhile, AI techies further enable their country's AI-cheats to evade identification.]
Mike Oitzman: Hugging Face Launches Reachy Mini Robot, As An Embodied AI Platform. (1-min. YouTube video, images, details; The Robot Report, July 11, 2025)
Pollen Robotics and Hugging Face have launched Reachy Mini, an open-source robot designed for enthusiasts, researchers, and builders to experiment with human-robot interaction, creative coding, and artificial intelligence (AI).
Standing at a compact 11 inches (27.9 cm) tall and 6.3 inches (16 cm) wide, and weighing a mere 3.3 pounds (1.5 kg), Reachy Mini is designed for accessibility and engagement. Its distinctive features include motorized head- and body-rotation, animated antennas for expressiveness, and multi-modal sensing capabilities through an integrated camera, microphones, and speakers. The companies said these features enable rich, AI-powered audio-visual interactions.
Pollen Robotics was acquired by Hugging Face in April 2025. At the time of the acquisition, Hugging Face intended to integrate its AI tools with Reachy's hardware. Reachy Mini appears to be the first result.
Reachy Mini can be ordered in two versions, at $300 and $450. Both versions are sold as kits, encouraging users to engage in the assembly process and deepen their understanding of the robot's mechanics. According to Hugging Face, the robot will offer 15-plus robot behaviors at launch - and its open-source design invites users to add and share many more robot behaviors to come.
[Users, relax; Reachy Mini does not have arms or legs. (Not yet.)]
NEW: Jon Reed: At The Last Minute, The Senate Yanked The Plan To Halt Enforcement Of State Artificial-Intelligence Laws From Trump's Big Tax-And-Spending Bill. Here's What That Means For Consumers. (CNet, July 5, 2025)
After months of debate, a plan in Congress to block states from regulating artificial intelligence was pulled from the big federal-budget bill this week. The proposed 10-year moratorium would have prevented states from enforcing rules and laws on AI if the state accepted federal funding for broadband access.
The issue exposed divides among technology experts and politicians, with some Senate Republicans joining Democrats in opposing the move. The Senate eventually voted 99-1 to remove the proposal from the bill, which also includes the extension of the 2017 federal tax cuts and cuts to services like Medicaid and SNAP. Congressional Republican leaders have said they want to have the measure on President Donald Trump's desk by July 4.
Tech companies and many Congressional Republicans supported the moratorium, saying it would prevent a "patchwork" of rules and regulations across states and local governments that could hinder the development of AI - especially in the context of competition with China. Critics, including consumer advocates, said states should have a free hand to protect people from potential issues with the fast-growing technology.
[Potential and existing issues - since the federal government now is easily bought out by corporations.]
"The Senate came together tonight to say that we can't just run over good state consumer protection laws", Sen. Maria Cantwell, a Washington Democrat, said in a statement. "States CAN fight robocalls, deepfakes and provide safe autonomous-vehicle laws. This also allows us to work together nationally to provide a new federal framework on artificial intelligence that accelerates US leadership in AI while still protecting consumers."
Not all AI companies are backing a moratorium. In a New York Times op-ed, Anthropic CEO Dario Amodei called it "far too blunt an instrument", saying the federal government should create transparency standards for AI companies instead. "Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed."

NEW: Anthropic C.E.O. Dario Amodei: Opinion: "Don't Let A.I. Companies Off The Hook!" (6-min. podcast; New York Times, June 5, 2025)
Picture this: You give a bot notice that you'll shut it down soon, and replace it with a different artificial-intelligence system. In the past, you gave it access to your emails. In some of them, you alluded to the fact that you've been having an affair. The bot threatens you, telling you that if the shutdown plans aren't changed, it will forward the emails to your wife.
This scenario isn't fiction. Anthropic's latest A.I. model demonstrated just a few weeks ago that it was capable of this kind of behavior.
Despite some misleading headlines, the model didn't do this in the real world. Its behavior was part of an evaluation where we deliberately put it in an extreme experimental situation to observe its responses and get early warnings about the risks, much like an airplane manufacturer might test a plane's performance in a wind tunnel.
We're not alone in discovering these risks. A recent experimental stress-test of OpenAI's o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyber-attacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.
None of this diminishes the vast promise of A.I. I've written at length about how it could transform science, medicine, energy, defense and much more. It's already increasing productivity in surprising and exciting ways. It has helped, for example, a pharmaceutical company draft clinical-study reports in minutes instead of weeks and has helped patients (including members of my own family) diagnose medical issues that could otherwise have been missed. It could accelerate economic growth to an extent not seen for a century, improving everyone's quality of life. This amazing potential inspires me, our researchers and the businesses we work with every day.
But to fully realize A.I.'s benefits, we need to find and fix the dangers before they find us.
Every time we release a new A.I. system, Anthropic measures and mitigates its risks. We share our models with external research organizations for testing, and we don't release models until we are confident they are safe. We put in place sophisticated defenses against the most serious risks, such as biological weapons. We research not just the models themselves, but also their future effects on the labor market and employment. To show our work in these areas, we publish detailed model evaluations and reports.
But this is broadly voluntary. Federal law does not compel us or any other A.I. company to be transparent about our models' capabilities, or to take any meaningful steps toward risk reduction. Some companies can simply choose not to.
Right now, the Senate is considering a provision that would tie the hands of state legislators: The current draft of President Trump's policy bill includes a 10-year moratorium on states regulating A.I.
The motivations behind the moratorium are understandable. It aims to prevent a patchwork of inconsistent state laws, which many fear could be burdensome or could compromise America's ability to compete with China. I am sympathetic to these concerns - particularly on geopolitical competition - and have advocated stronger export controls to slow China's acquisition of crucial A.I. chips, as well as robust application of A.I. for our national defense.
But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds - no ability for states to act, and no national policy as a backstop.
A focus on transparency is the best way to balance the considerations in play. While prescribing how companies should release their products runs the risk of slowing progress, simply requiring transparency about company practices and model capabilities can encourage learning across the industry.
At the federal level, instead of a moratorium, the White House and Congress should work together on a transparency standard for A.I. companies, so that emerging risks are made clear to the American people. This national standard would require frontier A.I. developers - those working on the world's most powerful models - to adopt policies for testing and evaluating their models. Developers of powerful A.I. models would be required to publicly disclose on their company websites not only what is in those policies, but also how they plan to test for and mitigate national security and other catastrophic risks. They would also have to be upfront about the steps they took, in light of test results, to make sure their models were safe before releasing them to the public.
Anthropic currently makes such information available as part of our Responsible Scaling Policy, and OpenAI and Google DeepMind have adopted similar policies, so this requirement would be codifying what many major developers are already doing. But as models become more powerful, corporate incentives to provide this level of transparency might change. That's why there should be legislative incentives to ensure that these companies keep disclosing their policies.
Having this national transparency standard would help not only the public, but also Congress, to understand how the technology is developing, so that lawmakers can decide whether further government action is needed.
State laws should also be narrowly focused on transparency and not overly prescriptive or burdensome. If a federal transparency standard is adopted, it could then supersede state laws, creating a unified national framework.
We can hope that all A.I. companies will join in a commitment to openness and responsible A.I. development, as some currently do. But we don't rely on hope in other vital sectors, and we shouldn't have to rely on it here, either.
This is not about partisan politics. Politicians on both sides of the aisle have long raised concerns about A.I. and about the risks of abdicating our responsibility to steward it well. I support what the Trump administration has done to clamp down on the export of A.I. chips to China and to make it easier to build A.I. infrastructure here in the United States. This is about responding in a wise and balanced way to extraordinary times. Faced with a revolutionary technology of uncertain benefits and risks, our government should be able to ensure we make rapid progress, beat China and build A.I. that is safe and trustworthy. Transparency will serve these shared aspirations, not hinder them.
[But what if A.I. canNOT remain safe and trustworthy? What goals will be prioritized by, say, a TrumPutin administration?]

NEW: Brad Templeton: Waymo's 6th-Generation Robotaxi Is Cheaper. How Cheap Can They Go? (Forbes, August 20, 2024)
Waymo, the Google-created robotaxi company, has revealed more details on their 6th generation vehicle, based on a body by China-based Geely/Zeekr and a new generation of custom sensors and software.
(Update: Waymo also announced today they have reached 100,000 fully autonomous trips per week in their service areas.)
In addition to the new vehicle, replacing the Jaguar i-Pace, the big change is a new sensor suite with 13 cameras, 4 LIDAR, 6 radar and many microphones. Waymo says this new platform offers more resolution, more range, more compute and does not reduce safety and, most interestingly, comes "at a significantly reduced cost".
This is both news and not news. It's always been expected that as robotaxi development progressed, costs would drop, and greatly. That's always the pattern with computers and electronics, though not the rule with cars. The software development has been very expensive, but deploys for free into the future. We've recently seen announcements of a $28,000 cost for Baidu's new robotaxi, and rumours that Tesla's delayed robotaxi concept car was derived from now-cancelled efforts to make a $25,000 low-end consumer electric car.
NEW: Mack DeGeurin: AI-Trained-On-AI Churns Out Gibberish Garbage. Eventually, It Collapses, "Poisoned With Its Own Projection Of Reality". (Popular Science, July 25, 2024)
Large language models, like those offered by OpenAI and Google, famously require vast troves of training data to work. The latest versions of these models have already scoured much of the existing Internet, which has led some to fear there may not be enough new data left to train future iterations. Some prominent voices in the industry, like Meta CEO Mark Zuckerberg, have posited a solution to that data dilemma: simply train new AI systems on old AI outputs.
But new research suggests that cannibalizing past model outputs would quickly result in strings of babbling AI gibberish and could eventually lead to what's being called "model collapse". In one example, researchers fed an AI a benign paragraph about church architecture, only to have it rapidly degrade over generations. The final, most "advanced" model simply repeated the phrase "black-tailed jackrabbits", continuously.
A study published in Nature this week put that AI-trained-on-AI scenario to the test. The researchers made their own language model, which they initially fed original, human-generated text. They then made nine more generations of models, each trained on the text output generated by the model before it. The end result in the final generation was nonsensical, surrealist-sounding gibberish that had essentially nothing to do with the original text. Over time and successive generations, the researchers say, their model "becomes poisoned with its own projection of reality".
[I.e.: Give AI enough time, and it will begin to sound like TrumPutin.]
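The collapse mechanism described above can be illustrated with a toy simulation (a minimal sketch of my own, not the Nature study's actual code): treat each "model" as a probability distribution over tokens, and "train" each generation on a finite sample drawn from the previous one. A token that misses one generation's sample vanishes for good, so diversity shrinks generation after generation:

```python
import math
import random
from collections import Counter

def entropy(dist):
    """Shannon entropy (bits) of a {token: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def next_generation(dist, n_samples, rng):
    # "Training on model output": the next model's distribution is just
    # the empirical frequency of a finite sample from the current model.
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    sample = rng.choices(tokens, weights=weights, k=n_samples)
    counts = Counter(sample)
    return {t: c / n_samples for t, c in counts.items()}

rng = random.Random(0)
vocab = [f"tok{i}" for i in range(50)]
dist = {t: 1 / len(vocab) for t in vocab}   # generation 0: uniform, maximal diversity
start_entropy = entropy(dist)

for gen in range(10):
    dist = next_generation(dist, n_samples=100, rng=rng)

# A token that fails to appear in one generation's sample can never
# reappear, so vocabulary size and entropy shrink, in expectation,
# every round - the "poisoned with its own projection of reality" effect.
```

Run it with more generations or smaller samples, and the distribution drifts toward a handful of tokens - the statistical analogue of a model stuck repeating "black-tailed jackrabbits".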
Gerrit De Vynck: The AI Deepfake Apocalypse Is Here. These Are The Ideas For Fighting It. (Washington Post, April 5, 2024)
AI-generated images are everywhere. They're being used to make non-consensual pornography, muddy the truth during elections, and promote products on social media using celebrity impersonations.
Experts say the problem is only going to get worse. Today, the quality of some fake images is so good that they're nearly impossible to distinguish from real ones. In one prominent case, a finance manager at a Hong Kong bank wired about $25.6-Million to fraudsters who used AI to pose as the worker's bosses on a video call. And the tools to make these fakes are free and widely-available.
A growing group of researchers, academics and start-up founders are working on ways to track and label AI content.
Using a variety of methods and forming alliances with news organizations, Big Tech companies and even camera manufacturers, they hope to keep AI images from further eroding the public's ability to understand what's true and what isn't.
NEW: Katie Strick: Is The AI Apocalypse Actually Coming? What Life Could Look Like, If Robots Take Over. (The Standard/UK, May 31, 2023)
From job losses to mass-extinction events, experts are warning that AI technology risks opening a Pandora's Box of horrors if left unchecked. Are they right to be sounding the klaxon?
The year is 2050. The location is London - but not as we know it. GodBot, a robot so intelligent it can out-smart any human, is in charge of the United Kingdom - the entire planet, in fact - and has just announced its latest plan to reverse global temperature rises: an international zero-child, zero-reproduction policy, which will see all human females systematically destroyed and replaced with carbon-neutral sex robots.
This chilling scenario is, of course, entirely fictional - though if naysayers are to be believed, it could become a reality within as little as a few decades, if we humans don't act now. Last night, dozens of AI experts - including the heads of ChatGPT creator OpenAI and Google Deepmind - warned that AI could lead to the extinction of humanity and that mitigating its risk should be as much of a global priority as pandemics and nuclear war.
The statement, published on the website of the Centre for AI Safety, is the latest in a series of almost-hourly warnings of the "existential threat" that machines pose to humanity over recent months, with everyone from historian Yuval Noah Harari to some of the creators of AI itself speaking out about the problems humanity may face, from AI being weaponised, to humans becoming dependent on it.


The "XZ Utils Backdoor" Close Call For Linux:

NEW: Steven Vaughan-Nichols: This Backdoor Almost Infected Linux Everywhere: The XZ Utils Close-Call. (ZDNet, April 5, 2024)
For the first time, an open-source maintainer put malware into a key Linux utility. We're still not sure who or why - but here's what you can do about it.
NEW: Who Is The Mysterious "Jia Tan", Who Installed A Backdoor In The Compression Tool "XZ Utils"? (Gigazine, April 4, 2024)
This article, originally posted in Japanese on April 4, 2024, may contain some machine-translated parts.
[Detailed info, as the hunt continues. If you find articles identifying "Jia Tan" before I do, please share!]
NEW: Amrita Khalid: How One Volunteer Stopped A Backdoor From Exposing Linux Systems Worldwide. An Off-The-Clock Microsoft Worker Prevented Malicious Code From Spreading Into Widely-Used Versions Of Linux Via A Compression Format Called "XZ Utils". (The Verge, April 2, 2024)
Linux, the most-widely-used open-source operating system in the world, narrowly escaped a massive cyber-attack over Easter weekend, all thanks to one volunteer.
The backdoor had been inserted into a recent release of a Linux compression format called XZ Utils, a tool that is little-known outside the Linux world but is used in nearly every Linux distribution to compress large files, making them easier to transfer. If it had spread more widely, an untold number of systems could have been left compromised for years.
And as Ars Technica noted in its exhaustive recap, the culprit had been working on the project out in the open. The vulnerability, inserted into Linux's remote log-in, only exposed itself to a single key, so that it could hide from scans of public computers.
The story of the XZ backdoor's discovery starts in the early morning of March 29th, when San Francisco-based Microsoft developer Andres Freund posted on Mastodon and sent an email to OpenWall's security mailing list with the heading: "Backdoor in upstream xz/liblzma leading to ssh server compromise". Freund, who volunteers as a "maintainer" for PostgreSQL, a Linux-based database, had noticed a few strange things over the preceding weeks while running tests. After some sleuthing, Freund eventually discovered what was wrong. "The upstream xz repository and the xz tarballs have been backdoored", noted Freund in his email. The malicious code was in versions 5.6.0 and 5.6.1 of the xz tools and libraries.
Freund later identified the person who submitted the malicious code as one of the two main xz Utils developers, known as JiaT75, or Jia Tan. JiaT75 was a familiar name: they'd worked side-by-side with the original developer of the .xz file format, Lasse Collin, for a while. As programmer Russ Cox noted in his timeline, JiaT75 started by sending apparently legitimate patches to the XZ mailing list in October of 2021.
Other arms of the scheme unfolded a few months later, as two other identities, Jigar Kumar and Dennis Ens, began emailing complaints to Collin about bugs and the project's slow development. However, as noted in reports by Evan Boehs and others, "Kumar" and "Ens" were never seen outside the XZ community, leading investigators to believe both are fakes that existed only to help Jia Tan get into position to deliver the back-doored code. The emails from "Kumar" and "Ens" continued until Tan was added as a maintainer later that year, able to make alterations, and attempt to get the back-doored package into Linux distributions with more authority.
The xz backdoor incident and its aftermath are an example of both the beauty of open-source, and a striking vulnerability in the Internet's infrastructure. Details of who is behind "JiaT75", how they executed their plan, and the extent of the damage are being unearthed by an army of developers and cyber-security professionals, both on social media and online forums. But that happens without direct financial support from many of the companies and organizations who benefit from being able to use secure software.
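For readers wondering whether their own systems were exposed: the check reduces to comparing the installed xz version against the two backdoored releases named above. A minimal sketch (mine, not from any of these articles; on a real system you would feed it the first line of `xz --version` output):

```python
import re

# The two release versions known to contain the backdoor.
BACKDOORED_VERSIONS = {"5.6.0", "5.6.1"}

def check_xz(version_line: str) -> str:
    """Classify an `xz --version` output line as backdoored or not."""
    match = re.search(r"\d+\.\d+\.\d+", version_line)
    if match is None:
        return "could not parse an xz version number"
    version = match.group(0)
    if version in BACKDOORED_VERSIONS:
        return f"xz {version}: BACKDOORED - downgrade or update immediately"
    return f"xz {version}: not one of the known-backdoored releases"

print(check_xz("xz (XZ Utils) 5.6.1"))   # flagged
print(check_xz("xz (XZ Utils) 5.4.6"))   # clean
```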


NEW: Still Worth It? HP EliteBook 840 G3 14-Inch Laptop Computer (4-min. YouTube video; TechInsomnia, December 22, 2022)
Is the 2016 HP EliteBook 840 G3 business-grade laptop still good in 2023? We will find out by performing tests, including Internet speed, video-streaming capability and playback on YouTube, sound, boot time, sleep time and a few more.
[MMS uses and sells more-powerful configurations of this excellent, now-inexpensive older laptop. Mine is a very-affordable and high-quality HP EliteBook 840 G4 with an Intel Core i7 CPU, 32GB RAM, 3TB of fast storage, 14"-diagonal display with extra-sharp 2560x1440 resolution and, of course, free, open-source software (FOSS): Linux Mint and a wide variety of favorite apps.]

NEW: Nick from Brittany: I'm Leaving Firefox, And This Is The Browser I Picked. (18-min. video; The Linux Experiment, November 8, 2021)
Time Index:
00:00 Intro
00:26 Sponsor - Linode
01:38 Why U No Love Firefox?
03:42 What I want in a web browser
05:03 Firefox Forks
06:35 Epiphany - GNOME Web
07:58 Google Chrome
08:47 Chromium
10:02 Brave
11:21 Vivaldi
12:46 Opera
13:26 Microsoft Edge
14:22 The FINAL CHOICE + full benchmarks
Let's start with why I want to switch from Firefox to something else.
[A good first look at many alternative browsers, with many good Comments. What to try in Linux? FOSS, I should think. Maybe Brave? Epiphany? LibreWolf?]
NEW: Chris Freeland: Announcing A National Emergency Library To Provide Digitized Books To Students And The Public. (Internet Archive, March 24, 2020)
To address our unprecedented global and immediate need for access to reading and research materials, as of today, March 24, 2020, the Internet Archive will suspend wait-lists for the 1.4-million (and growing) books in our lending library by creating a National Emergency Library - https://blog.archive.org/national-emergency-library/ - to serve the nation's displaced learners. This suspension will run through June 30, 2020, or the end of the U.S. national emergency, whichever is later, so people who cannot physically access their local libraries - because of COVID closures or self-quarantine - can continue to read and thrive during this time of crisis, keeping themselves and others safe.
This library brings the books - all from Phillips Academy Andover and Marygrove College, much of Trent University's collections, and over a million other books donated from other libraries - to readers worldwide who are locked out of their own libraries.
This is a response to the scores of inquiries from educators about the capacity of our lending system and the scale needed to meet classroom demands because of the COVID closures. Working with librarians in the Boston area, led by Tom Blake of Boston Public Library, who gathered course reserves and reading lists from college and school libraries, we determined which of those books the Internet Archive had already digitized. Through that work, we quickly realized that our lending library wasn't going to scale to meet the needs of a global community of displaced learners. To make a real difference for the nation and the world, we would have to take a bigger step.
"The library system, because of our national emergency, is coming to aid those that are forced to learn at home", said Brewster Kahle, Digital Librarian of the Internet Archive. "This was our dream for the original Internet coming to life: the Library at everyone's fingertips."
Franco Ordoñez: "China Wants Your Personal Information", Trump's National Security Adviser Warns. (4-min. listen; NPR/All Things Considered, December 10, 2019)
President Trump's new national security adviser is warning of an information-security doomsday scenario for U.S. allies that allow Chinese telecommunications-company Huawei to build their next-generation 5G networks.
Robert O'Brien said countries that allow Huawei in could give China's communist government backdoor access to their citizens' most-sensitive data. "So every medical record, every social-media post, every email, every financial transaction, and every citizen of the country with cloud computing and artificial intelligence can be sucked up out of Huawei into massive servers in China", O'Brien told NPR in an interview. "This isn't a theoretical threat", O'Brien said before speaking at the Reagan National Defense Forum, an annual gathering of defense-industry and military officials.
Alfred Ng: Congress Warns Tech Companies: "Take Action On Encryption, Or We Will." (CNET, December 10, 2019)
U.S. lawmakers are poised to "impose our will", if tech companies don't weaken encryption so police can access data.
Tech companies and privacy advocates have long supported encryption, noting that the privacy and security technology protects people from hackers, crooks and authoritarian governments. Law-enforcement officials, however, argue that encryption blocks criminal investigations by preventing access to suspects' devices and to their communications on messaging apps.
This debate took center stage in 2016 when Apple fought an FBI order to help unlock a terrorist's iPhone, arguing that providing a master-key to decrypt devices would endanger all iPhone users.
At a Senate Judiciary Committee hearing today, Apple's manager of user privacy, Erik Neuenschwander, reiterated that point for lawmakers. "At this time, we've been unable to identify any way to create a backdoor that would work only for the good guys", Neuenschwander told senators. "In fact, our experience is the opposite. When we have weaknesses in our system, they are exploited by nefarious entities."
NEW: Robert Walton and David Bowman: As Concern Grows Over Bitcoin's Energy Use, What's Next For Crypto-Currency? Crypto-Currency Mining Is Driving Up Power Prices In Small Towns With Cheap Electricity. Can Bitcoin Be Made More Efficient, Or Is The Future Of Crypto In Another Currency? (Utility Dive, March 28, 2018)
David Bowman started mining Bitcoin in his apartment in 2014 - as he puts it, "jerry-rigging around and busting the circuits here and there". He has since expanded, though he balks at calling it a "commercial" operation.
Compared to his neighbors, his energy use is paltry. Most anywhere else, his power demand for mining would have gone unnoticed. But Bowman runs Plattsburgh BTC, and the small city in upstate New York has improbably become the front line in a growing energy debate.
Here's the issue: Bitcoin is a crypto-currency, begun in 2009, that leverages block-chain technology to operate without a central clearing authority. Transactions are confirmed by "miners", who are compensated in a few ways for the computing power needed to verify transactions. But the primary incentive is being awarded new bitcoins; and as the value of a bitcoin has risen, so has interest in mining the currency.
With Bitcoin currently valued at around $8,000, the price has attracted investors spending $Millions to develop faster and more-efficient mining operations.
"But a byproduct of that is gigantic centralized mining operations", Bowman said. "And it's also created rampant energy consumption. It has become an arms race, and it definitely was not intended to be that way."
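The "arms race" Bowman describes falls directly out of how proof-of-work mining operates: miners race to find a nonce whose hash clears a difficulty target, and each added bit of difficulty doubles the expected number of hashes - and the electricity - needed. A toy sketch of the idea (my own illustration, using single SHA-256, not Bitcoin's actual double-SHA-256 block format):

```python
import hashlib

def mine(block_data: str, difficulty_bits: int, max_nonce: int = 10_000_000):
    """Find a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits.

    Each extra bit of difficulty doubles the expected number of hashes,
    which is why rising difficulty (and coin price) drives an arms race
    in hardware and electricity.
    """
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    raise RuntimeError("no valid nonce found within max_nonce attempts")

# A 16-bit difficulty needs roughly 65,000 hashes on average; Bitcoin's
# real network difficulty corresponds to vastly more hashes per block,
# hence the warehouses of specialized mining hardware.
nonce, digest_hex = mine("a block of example transactions", difficulty_bits=16)
```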
NEW: Alfred Ng: FBI Asked Apple To Unlock iPhone Before Trying All Its Options. (CNET, March 27, 2018)
The FBI made more of an effort to get Apple to unlock a terrorist's iPhone, than it did trying to open the device on its own, according to a Justice Department report. The DOJ's Office of the Inspector General noted in its report (PDF) that the FBI's Cryptologic and Electronics Analysis Unit (CEAU), which cracks mobile devices, didn't start looking at outside methods to open the iPhone until just before Feb. 16, 2016, the day the FBI sent a court order to Apple, demanding help.
In testimony to Congress, then-FBI Director James Comey said the bureau had no other option than to ask Apple for help cracking the iPhone 5C of a terrorist who killed 14 people in a 2015 mass shooting in San Bernardino, California. However, an FBI department chief knew a vendor was "almost 90% finished" with a solution for breaking into the locked iPhone before reaching out to Apple, according to the report.
The FBI's request culminated in an intense standoff between Apple and the bureau, which tried ordering the tech company to build a backdoor that would've allowed the government to unlock the iPhone. The case set up a legal battle between security and privacy.
"The FBI's leadership went straight to the nuclear option - attempting to force Apple to circumvent its encryption - before attempting to see if their in-house hackers or trusted outside suppliers had the technical capability to break into the San Bernardino terrorist's iPhone", said Sen. Ron Wyden, a Democrat from Oregon. "It's clear now that the FBI was far more interested in using this horrific terrorist attack to establish a powerful legal precedent, than they were in promptly gaining access to the terrorist's phone."
[The FBI said it "asked for help", "reached out", etc. NO; it sent a court order DEMANDING that Apple SECRETLY make ITS OWN phone products LESS-SECURE - like other companies that HAD secretly caved in to that pressure.]

Return to main section of Money Is Not Wealth.