
The Ultimate Race to Win Consumer AI — Wearables, Hardware & Beyond Apps

A look into what companies are actually building to win the consumer wearable AI space.

Hello from the beyond! This is Voyager, the only designers-aboard spaceflight navigating you through the AI-UX universe.

We treat AI news like a speed dating round – fast, fun, and to the point.

A LOT has happened. I started writing this at the start of the month and then Humane launched…

It’s a juicy piece today. Just before biting into it, in 10 secs, here’s what I’m writing about —

  • 🥩 The race to win Consumer Wearable AI but with a twist

    • What they’re actually brewing — The Game-changer move, with all its AI-UX interactions

  • 🔮 What’s the latest AI-UX trend?

  • 🤤 What’s the future promise? “Screenless UX”

  • 🤣 Meme of the day

There are 4 AI innovation trends taking place right under our noses —

  • Dynamic Interfaces

  • Ephemeral Interfaces

  • aiOS → wrote about it last time

  • Screenless UX → today’s post

I found an interesting piece that was posted right as I hit the publish button on last month’s issue on aiOS — and it’s on point.

aiOS is the future. I still believe it. But specifically, it’s the future for consumer AI. With the launch of GPT-3, all the giant products were being atomized and infused with AI. Now, the pendulum is swinging back to creating one app with everything. I mean, Adobe just teased its AI integration in Premiere Pro, and it’s Sora, Runway, and Pika all rolled into one. The compression is coming.

The main vision with AI infusion is to bring everything to you instead of you trying to find an app for a certain task. aiOS is the default place because, at the end of the day, everything can easily be compressed into the operating system (OS).

The fight to build the OS is underway.

If you’re building the OS, you need your own hardware. AI wearables are a core part of the race. Whoever wins the AI wearable war wins the consumer’s trust for a long time.

The Race to Win Consumer Wearable AI —

The ULTIMATE deep dive into AI wearables, your daily-wear swag. It’s a gold rush. People are racing to build THE wearable, and there can only be a few winners, given that there are only so many things you can wear at a time.

As I researched all the wearables, I realized they’re actually fundamentally different. Or maybe it’s just my naive self succumbing to each company’s marketing.

We’re diving into —

  • Pin: Humane

  • Pocket Device: Rabbit

  • Pendant: Tab, Rewind’s Limitless

  • (New) iPhone: ReALM, [a secret trio product]

  • Glasses: Meta’s Ray-Bans, Brilliant Labs’ Frame

Real Tony Stark stuff, each with a different mission and a different execution.

What’s actually brewing under…

This is Part 1: Humane, Rabbit, Tab. Part 2: next week.

1. Humane’s AI Pin

Humane AI pin

Overview

  • It’s a hardware marvel. So compact with so many features/possibilities!

  • They’re taking Siri out for a walk, literally. They’re building an AI assistant that’s more conversational, less screen-bound, and has more context about your day-to-day life.

  • They recently launched. It’s been a bloodbath. The critics have been brutal. I, on the other hand, believe in Humane and its potential. The execution though... umm, it’s getting there.

    • Execution can catch up, innovation takes time.

The Game-changer move

  • They’re building their own operating system, CosmOS, an aiOS. The intention: replace the smartphone.

    • Being able to connect multiple LLMs and services.

    • Usable on any other device, not limited to the AI Pin.

  • I think the underlying technology they’re building (as initially promised) is a connection of APIs: one device that links all your apps, one place to take actions from using natural language.

  • However, it seems that got lost in the launch and reviews…?

AI-UX Interactions

  • Voice, Touch, Spatial gestures, Camera — truly a marvel.

  • One-finger hold to send a voice command.

  • Two-finger hold to enter interpreter mode (real-time translation).

  • Trackpad for touch gestures.

  • Projects monochrome images (the “Laser Ink Display”) onto surfaces such as the palm of your hand to show simple content like the time, text messages, temperature, alerts, and more.

    • 7 years ago, a device called Cicret was announced with a projector to display on your arm. The major problem? The projected screen is only as bright as your skin in daylight — meaning on a sunny day you can’t see anything.

    • If you just want to watch something fun, this review is 🤣

  • You can interact with the projected screen using gestures —

    • Tilt towards you/away from you to go out of/into the UI screens.

    • Rotate your palm to “hover” over a UI button.

    • Pinch to “click” the hovered button.

    • Clench palm to go back.

  • A voice command starting with “Look..” uses the camera as input. Look, that’s good, but is it too many interactions to remember?

  • Voice output can be interrupted and turned into a UI display by bringing your hand in front of the device.

  • A small notification light indicator.

“We’re all waiting for something new, something that goes beyond the information age that we have all been living with. At Humane, we’re building the devices and the platform for what we call the intelligence age.”

Imran Chaudhri, co-founder at Humane

2. Rabbit’s R1

Rabbit R1 Interactions

Overview

  • Product launched (watch launch video)

  • What a beautifully designed piece of hardware — teenage engineering strikes again!

  • For now, the R1 is ChatGPT in an orange box.

  • It’s a toy, but with learning capabilities (a coming-soon feature).

  • It’s similar to how a Tesla learns about roads and situations as humans drive it around. Rabbit learns how to navigate interfaces, and how you as a user use them, so it can do it automatically in the future.

The Game-changer move

  • LLMs are Large Language Models that understand natural language really well. That’s the norm, right? Everyone knows this.

  • Agents, the upcoming AI fad, can take actions on your behalf. But they can’t perform a task they weren’t designed for.

  • Enter “LAMs”, a.k.a. Large Action Models, the unique selling point of the R1.

  • LAMs not only understand you, but can also take action.

  • “Call me an Uber and message Lizzy about tomorrow’s ticket, oh and also add carrots to the grocery list”

    • *magic magic magic*

    • The deeds have been done before you can finish your sentence (in the demo-land universe of idealism)

  • It’s like an intuitive companion. It triggers actions across all devices and from any device.

  • It’s trained on your actions.

  • Cons: currently limited to only 4 apps (Uber, DoorDash, Midjourney, Spotify). For the rest of the apps, the UX needs to be built out in the Rabbit R1.
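To make the LAM idea concrete, here’s a toy sketch of what an “understand, then act” pipeline could look like. Everything below (the intent schema, the handler names) is my own illustration, not Rabbit’s actual, unpublished stack.

```python
# Toy sketch of a LAM-style pipeline: natural language in, actions out.
# All names here are hypothetical; Rabbit's real internals aren't public.
from dataclasses import dataclass

@dataclass
class Intent:
    action: str   # e.g. "book_ride", "send_message", "add_to_list"
    params: dict  # structured slots extracted from the utterance

def parse_intents(utterance: str) -> list[Intent]:
    """Stand-in for the language model: split one utterance into intents."""
    # A real system would call an LLM here; we hardcode the demo sentence.
    return [
        Intent("book_ride", {"service": "Uber", "destination": "home"}),
        Intent("send_message", {"to": "Lizzy", "text": "about tomorrow's ticket"}),
        Intent("add_to_list", {"list": "groceries", "item": "carrots"}),
    ]

# Each action maps to a handler that actually performs it.
HANDLERS = {
    "book_ride":    lambda p: print(f"Booking {p['service']} to {p['destination']}"),
    "send_message": lambda p: print(f"Messaging {p['to']}: {p['text']}"),
    "add_to_list":  lambda p: print(f"Adding {p['item']} to {p['list']}"),
}

for intent in parse_intents("Call me an Uber and message Lizzy..."):
    HANDLERS[intent.action](intent.params)
```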

Fun fact: they don’t use APIs to link to the other apps. You log in through their web portal, which is like a virtual machine where you can open apps, and the Rabbit interacts with that interface.
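In spirit, that “drive the interface instead of the API” approach is what browser automation does. As a rough analogy (my own sketch with a made-up URL, selectors, and flow, not Rabbit’s code), it could look something like this with Playwright:

```python
# Rough analogy for interface-level automation: drive a real web UI
# instead of calling an API. The URL and selectors below are invented
# for illustration; this is not Rabbit's actual code.
from playwright.sync_api import sync_playwright

def play_song(song: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example-music-app.com")      # hosted app session
        page.fill("input[data-testid='search']", song)  # hypothetical selector
        page.keyboard.press("Enter")
        page.click("button[aria-label='Play']")         # hypothetical selector
        browser.close()

play_song("your favourite track")
```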

AI-UX Interactions

  • The camera to see your surroundings, at your will. The fun part is that it flips, so it acts as both the front and the back camera.

  • Push to Talk button.

  • Scroll wheel (just because? touch scrolling works too).

  • Hold the button + scroll for increase/decrease actions.

  • Hold the device sideways to input via keyboard.

  • A web portal to log in to other apps (which “records” your flow to teach the Rabbit how to use a specific app).

  • The R1 responds instantly after you finish speaking. Even though the output takes time, the device acknowledges your input immediately (Humane’s doesn’t).

  • Cons: you interact with it like your phone. The friction of using your phone vs. the Rabbit R1 is the same.

What if you had to redesign the Rabbit R1? What would it be? Watch this funny video! 🤣 

3. Tab

Tab product

Overview

  • Product still in the making.

  • It’s a pendant that listens to everything happening around you. Only audio, no camera/visuals. Not a 100% privacy invasion, phew?

  • They initially positioned themselves as an AI assistant you can wear around your neck.

    • Look back on anything that has happened in your life.

    • Ask questions like “what happened on that day, what did that mean?”

  • But not anymore… (a genius move?)

The Game-changer move

  • In a recent talk, the founder mentioned they’re not trying to build an assistant anymore. It’s a race Apple can easily win.

    • For starters, there’s no iMessage API, so the assistant won’t have access to your messages. Apple 1, assistants 0.

  • The focus is on building a data stream. A data stream of your life.

  • How I see it: it’s a context-gathering machine that knows what’s happening around you and in your life.

  • Honestly, I recently realized how powerful voice is. I accidentally left screen recording ON on my Mac, and the next morning was a lil creepy. I could hear myself working, my meetings, me eating, watching a show, and talking to my friends. And that’s just 10 hours of audio.

  • Another reason for the mission switch: humans don’t really look back. They either look forward with high hopes or are stuck in the past, moaning over their 8th-grade heartbreak.

    • Humans want short term context, not long term.

    • Which makes sense: I click so many photos but hardly revisit them unless it’s to post on Instagram or I’m feeling nostalgic.

  • I think it’s genius. Imagine being able to build apps on top of this context stream. Getting book or movie recommendations based on what’s going on in your life? Pre-filtering your Bumble profiles based on who matches your context stream? (A toy sketch of the idea is below.)
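To picture the “context stream” idea, here’s a tiny sketch of what it could look like as data, with a toy app querying it. The shape of it is entirely my own speculation, not Tab’s actual design.

```python
# Speculative toy model of a "context stream": a time-ordered log of life
# events that downstream apps could query. Not Tab's real data model.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextEvent:
    timestamp: datetime
    transcript: str    # what the pendant heard
    topics: list[str]  # tags a model extracted from the audio

stream = [
    ContextEvent(datetime(2024, 4, 20, 9, 30), "planning a trip to Lisbon", ["travel"]),
    ContextEvent(datetime(2024, 4, 20, 13, 5), "stressed about the launch", ["work"]),
]

def recommend_book(stream: list[ContextEvent]) -> str:
    """A downstream app: recommend a book based on recent topics."""
    recent_topics = {t for event in stream[-10:] for t in event.topics}
    return "a travel memoir" if "travel" in recent_topics else "a calming read"

print(recommend_book(stream))  # -> a travel memoir
```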

Or I could be totally wrong and unaware because the founder was once quoted saying, “Tab is not an assistant, period. I’m not building something that’s going to connect to Notion or your emails any time soon. I’m solely building… a friend that morphs into your creative partner, life coach, [or] therapist as needed”

AI-UX Interactions

  • A light indicator when it’s working.

  • Voice input to listen to everything.

  • (This may not be how the launched product functions, though.)

It’s not an ai assistant, it’s a tap on you at all times. Controversial enough to sell.

Someone on Twitter

Part 2 of ‘What’s actually brewing under...’, covering the rest of the products, will be shared next week. Thank you for reading this one!

Adding a new section to each newsletter — instead of just announcing the release of new AI-UX interactions, I thought I’d make it more meaningful.

What’s the latest AI-UX trend I’m seeing?

Real-time text prompt to image (soon any form of input and output?).

Yes, we are here. No more slowww buffering to see the output.

  • Meta AI recently launched their chatbot and platform, which features real-time text-to-image output.

  • Decohere AI is another startup with only one function: creating images fast.

  • Krea AI, with a team of 10, is just at another level!

Meta AI in action - creating images in realtime

Decohere AI in action, faster and much better than Meta

Krea AI in action, the app is wayyy better than this GIF’s quality.

Give it a whirl. It’s fun and it’s fast.

Apparently, Zuck uses this feature to play an “imagination game” with his daughters before bedtime. Just a random fact I came across.

Make stuff real-time. That’s what people want.

With AI, we’re living in a world of promises. That’s super clear.

What’s the future promise?

Up until now, software apps and websites commoditized the operations layer — apps like Uber or Deliveroo made it easy for anyone to use services without worrying about the on-the-ground logistics. This was the service layer.

Now the landscape is changing again.

There’s a new layer, the interface layer, a.k.a. Screenless UX as I like to call it, which is predicted to commoditize the service layer.

We’re already seeing that happen — the Rabbit R1 and Humane show us visions of using only natural language to get everything in one place. They bring different services, like booking a taxi, messaging your friend, and placing a reservation at your favourite restaurant, into one place. You interact with that one interface, and it need not be a screen.

These are still demo videos though, and quite a few iterations away from us being able to try them out.

A real-life example that comes to mind is visiting IKEA. The furniture is arranged such that you’re sold the dream home, not “pieces of wood”. You envision yourself living there, and then you buy. A software version of this is Pinterest. Why would users browse Amazon when they can first envision on Pinterest and buy directly from there?

This is a near possibility. The future holds unimagined experiences.

The biggest question now is: who’s going to win Screenless UX?

And secondly, how do you design for Screenless UX?

I’ve started reading about how to design agentive technologies, and I’ll be sharing my research in the next newsletter. But if you’re already designing and thinking about agentive technologies, here’s a gentle reminder not to go overboard. The aim is still to solve the user’s problem!

I hope designers will be careful about not overusing this [natural language] kind of interface, as a graphic output can be much more understandable to users when reviewing a set of options than a string of text.

Christopher Noessel

A quote from Christopher Noessel’s book ‘Designing Agentive Technology’

🤣 Meme App of the day

Marissa Mayer launched an app recently. Who’s she?

  • She was president and chief executive officer of Yahoo! from 2012 to 2017.

  • She was a long-time executive, usability leader and key spokesperson for Google.

The app is called Shine. It’s meant to help you share photos. Yes, share. Is this 2012..?

It looks like the team has no designers 😂 

Screenshots of the app as publicised on the app store

A twitter-head commented that the website looks like “an Indian wedding invitation from 2007” 🤣 

Screenshot of the website

Google Photos, watch out?

- a designer in the aiverse a.k.a mr. easter eggs bugs bunny

P.S. — Wait wait, time for quick feedback? What did you think of this piece? Reply to this email and let me know: do you think you learnt something new?

P.P.S. — As always, don’t forget to invite your other designer friends onboard the Voyager - we have a few empty window seats on this spaceship 😉

 
