
Welcome back to Neural Notes, a weekly column where I look at how AI is affecting Australia. In this edition: Amazon acquires AI wearable startup Bee, whose always-listening device promises to bring agentic AI to your wrist.
It’s an interesting move. AI wearables haven’t exactly had a stellar track record so far. And considering Amazon has its own history of surveillance missteps, this acquisition raises serious questions.
Bee sells a US$49 wearable wrist device that passively records and transcribes conversations. The pitch is that it’s building a searchable memory of your life.
It offers users daily summaries and personalised suggestions through its app. It can also access emails, contacts, calendar events, photos and location data if granted permission.
The terms of the acquisition have not been disclosed, but an Amazon spokesperson has said it wants to give users “even greater control” over their devices, and reiterated that Amazon “[has] been strong stewards of customer data since our founding and have never been in the business of selling our customers’ personal information to others”.
But Amazon’s track record around customer data makes that promise feel a little less comforting.
Amazon’s privacy history with Alexa and Ring is hard to ignore
Amazon also said it “cares deeply” about customer privacy. But its history tells a more complicated story.
As far back as 2018, Alexa devices were caught recording private conversations without the wake word. In one widely reported case, a device mistakenly sent one of those recordings to a random contact.
In 2019, it was revealed Amazon had been sending thousands of voice recordings to human contractors to improve Alexa’s accuracy, a practice many users weren’t clearly informed about. It’s worth noting both Apple and Google were found to be doing the same.
In March 2025, Amazon removed the option for Alexa users to stop their voice recordings being sent to its servers, meaning all voice interactions are now processed in the cloud.
Ring doorbells, acquired by Amazon in 2018, have also drawn scrutiny. In 2022, Amazon admitted Ring had shared user footage with police without consent at least 11 times.
In 2023, the company paid a US$5.8 million settlement to the US Federal Trade Commission over privacy violations and inadequate data protections.
Last week, Ring quietly reintroduced police video request features through its Neighbors app, sparking further concerns about surveillance creep.
Even Amazon’s now-defunct Halo health band, launched in 2020, raised eyebrows for tracking body fat percentage and analysing emotional tone based on your voice.
Bee promises memory and agentic AI, but can it deliver?
Bee brands itself as ‘agentic’ AI designed to act in your interest, not the platform’s. But with passive recordings and data-rich transcripts, the line between utility and surveillance quickly blurs. Sure, privacy controls exist today, but we’ve already seen plenty of instances over the years of those controls failing customers.
Beyond that, Bee itself hasn’t been particularly accurate so far.
At US$49, Bee sells itself as a memory assistant that passively captures conversations, generates summaries and delivers reminders.
But early testers found it struggled with accuracy. The Verge reported that it often mistook TV shows, TikTok videos and music for real conversations, and hallucinated memories from them. The result was flawed or incomplete summaries, meaning it’s perhaps not quite up to replacing your memory right now.
Humane, Limitless and Rabbit show how risky AI devices can be
To be fair, the teething issues Bee is having aren’t happening in a vacuum. AI hardware startups have had a hard slog of it. Three of the most hyped AI-first gadgets — Humane’s Pin, Limitless, and the Rabbit R1 — have all struggled to deliver on their promises.
Humane’s AI Pin launched in April 2024 at US$699 with a US$24/month subscription attached. It promised smartphone-like functionality without a screen.
But it was plagued by poor reviews, mounting returns, safety recalls, and tech failures ranging from overheating chargers and clumsy clips to unreliable voice responses.
Between May and August 2024, returns outpaced sales: of the roughly 10,000 units sold, only about 7,000 were actually kept. By February 2025, Humane had shut down the AI Pin and sold its IP to HP for US$116 million, bricking users’ devices and leaving many without refunds. That said, some dedicated hobbyists have since hacked the disabled devices back to life.
Limitless positioned itself as a workplace assistant. Its US$499–$599 pendant records meetings, uploads them to the cloud for transcription, and generates summaries and reminders. It boasts a 100‑hour battery life and encrypted processing. But like Bee, it faces questions about privacy, consent, and whether passive data capture will ever be welcome in professional settings.
Then there’s the Rabbit R1 — a compact, AI-powered device with a scroll wheel interface and ambitions to replace your smartphone for tasks like booking rides, generating playlists, or summarising your day.
It sold out its first batch within 24 hours of launching in early 2024. But reviews skewed negative, with users criticising its laggy performance, limited functionality, and confusing interface.
Rabbit later clarified the R1 wasn’t actually running a true “large action model” as initially implied, but was mostly relaying commands to existing apps and services via APIs.
Backing from Amazon won’t solve Bee’s trust problem
Compared to these predecessors, Bee does have a few things on its side: a far cheaper price point and, now, backing from one of the largest companies in the world.
Still, the market for AI wearables is becoming increasingly crowded, especially as established hardware giants inject more AI into their smartwatches and fitness trackers.
Amazon has the resources to weather early stumbles, but Bee still faces the same technical, ethical and market risks that have tripped up its peers.
If Amazon wants Bee to succeed where others have failed, it will need to prove not only that the tech works, but that users can actually trust it.