A couple of weeks ago, Google and Samsung announced a big Gemini development coming to their newest devices: task automation. Starting with food delivery and rideshare apps, Gemini would be able to use certain apps on your behalf in a virtual window to take care of things like ordering dinner or getting a car to the airport — all based on simple prompts. You know, all the stuff we’ve been promised for years that AI assistants will be able to do. That feature wasn’t live when I first started testing the S26 Ultra, but it just arrived in beta as part of an update. And boy is it weird watching your phone use itself!
The first prompt I gave it was pretty simple: order an Uber to the airport. Gemini asked for clarification to determine which airport (a good question to ask!), then it went through a couple of steps on its own: adding the destination and opting to skip the step where you specify your airline, which doesn’t really matter at my local airport since it’s all in one terminal. As promised, the system stopped before the final step and prompted me to review the details before putting in the request for a car.
A vague and slightly more complicated request to order a coffee and a croissant required a little more input from me — and a lot of time on Gemini’s part scrolling through Starbucks’ hot drink options — but sure enough, it found the flat white on the menu. It also confronted a crucial decision: order the chocolate croissant warmed, or straight out of the pastry case? Without my input, it specified (correctly) that the pastry should be warmed. Pretty impressive for an assistant that, just a year ago, would argue with me over the details of a flight on my calendar.
I’ve got much more testing to do with this automation feature and I plan to spend the next few days throwing it some curveballs. Still, it’s impressive to see this feature out in the wild working as intended — so far, at least.
Photography by Allison Johnson / The Verge
