Samsung held its annual Unpacked event on January 17, 2024, to launch its new Galaxy S24 smartphone. The company branded the experience as “Galaxy AI,” a nod to putting more AI, including generative AI (genAI), into its smartphones. The program kicked off with announcements about experiences rather than the new device’s hardware (e.g., cameras, screen), which is not standard practice for Samsung. The 2024 Unpacked event may have been one of my favorites of all time. Here’s why.
The theme of the day was using AI to put the user at the center of the smartphone experience, with an emphasis on connection, creation, and collaboration. For me, the big announcement of the day was the ability to run genAI on the smartphone itself while protecting the user’s data, photos, notes, and more. Keep in mind that many, if not most, so-called “genAI applications” rely on more than genAI to run; a host of other AI and machine learning (ML) technologies are in the mix. Samsung didn’t reveal those details. Here’s what I was able to demo that excited me:
- Language translation in near real time. Through texting or talking (speech-to-text), the Galaxy S24 will translate what one person is saying into one of 13 languages, including dialects (e.g., English as it is spoken in India, the United Kingdom, or the United States). I had a simple conversation with a staff member who spoke Mexican Spanish while I spoke US English. It was not instantaneous, but it was fast enough to be effective. The only mistake was rendering my name once as “Jewish” rather than “Julie.” (I sketch the basic speech-to-text-to-translation pipeline in code after this list.)
- Support for multimodal inputs to search for information, answer questions, or shop. Samsung partnered with Google to launch a new service called “Circle to Search.” Users can draw a circle with their finger or a stylus to select an object in a photo and request more information. While there was a limited gallery of photos to use, I was impressed by the speed and the search results. Some will argue that using an image as a search input is not new: Cathy Edwards, a VP of engineering at Google, spoke at the event and mentioned that consumers use Google Lens 12 billion times each month. (The second sketch after this list shows the region-selection step.)
- Leveraging genAI to let consumers edit, alter, or essentially create new images. The photo editing app allows users to modify photos, and Samsung adds a watermark to altered or newly created images. Imagine you take a picture of your friend flying over a ski jump at a resort. Your friend might have gotten five inches off the ground; while they may have felt like they were flying, the photo evidence is less impressive. The user can fix that: select the skier in the image and move them, say, another 1–2 meters off the slope, and the application will fill in around the moved skier to make the photo look authentic (see the inpainting sketch after this list). The app also offers some pre-baked options to improve photos, such as removing shadows or sharpening lines. (Total aside: The smartphone will also turn videos filmed at full speed into slow motion.)
- Summarizing group texts and notes, plus formatting them. Imagine you are in a meeting while your sales team is planning an after-work outing for drinks. Rather than reading back through all the text messages to catch up, you can ask the service to summarize the conversation for you. The service creates the same kinds of summaries for notes, and it will also format them and offer other insights. These features extend to recorded voice memos as well. (The last sketch after this list shows the summarization step.)
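Samsung didn’t disclose how the live translation feature is built, but the basic flow is straightforward to picture: speech-to-text, then text-to-text translation (and, optionally, text-to-speech on the way back out). Below is a minimal sketch of that pipeline using open-source models from Hugging Face as stand-ins; the model choices are my assumptions, not Samsung’s on-device stack.

```python
from transformers import pipeline

# Open-source stand-ins for the on-device models (my assumption, not
# Samsung's stack): Whisper for speech-to-text, Opus-MT for translation.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")

def translate_utterance(audio_path: str) -> str:
    """Transcribe a Spanish utterance and translate it to English."""
    transcript = asr(audio_path)["text"]                   # speech-to-text
    return translator(transcript)[0]["translation_text"]  # text-to-text

# Example: print(translate_utterance("hola_me_llamo_julie.wav"))
```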
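Google hasn’t published a Circle to Search API, so the search backend below is hypothetical. The part that is easy to reproduce is the gesture handling: reduce the user’s finger-drawn circle to a bounding box, crop that region, and hand the crop to a visual search service.

```python
from PIL import Image

def crop_circled_region(photo_path: str,
                        stroke: list[tuple[int, int]]) -> Image.Image:
    """Crop the region a user circled; `stroke` is the traced (x, y) points."""
    xs = [x for x, _ in stroke]
    ys = [y for _, y in stroke]
    image = Image.open(photo_path)
    # Bounding box of the stroke, clamped to the image edges.
    box = (max(min(xs), 0), max(min(ys), 0),
           min(max(xs), image.width), min(max(ys), image.height))
    return image.crop(box)

# `visual_search` is a hypothetical stand-in for the Lens-style backend.
# results = visual_search(crop_circled_region("ski_trip.jpg", stroke_points))
```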
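The “move the skier and fill the hole” edit is, at its core, generative inpainting. Samsung hasn’t said which models power it, so here is the same idea with an open-source Stable Diffusion inpainting pipeline: white pixels in the mask mark the hole left behind, and the model paints plausible background into it.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Open-source analogue of generative fill (not Samsung's actual model).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

photo = Image.open("ski_jump.jpg").convert("RGB").resize((512, 512))
# White pixels mark the hole where the skier used to be.
mask = Image.open("skier_hole_mask.png").convert("L").resize((512, 512))

# Paint plausible snow and sky into the masked region.
filled = pipe(prompt="empty snowy ski slope under a clear sky",
              image=photo, mask_image=mask).images[0]
filled.save("ski_jump_filled.jpg")
```

A production pipeline would also composite the skier at the new position and blend the edges; the step above covers only the “fill in around the moved skier” piece.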
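Thread and note summarization is the most familiar of these features from a developer’s perspective: concatenate the messages and ask a summarization model for a short digest. A minimal sketch, again with an open-source model as a stand-in for whatever Samsung runs on device:

```python
from transformers import pipeline

# Stand-in summarizer; Samsung hasn't disclosed its on-device model.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

thread = " ".join([
    "Priya: Drinks after work today?",
    "Sam: Yes! Where?",
    "Priya: The place on 5th at 6pm.",
    "Lee: Can we do 6:30? My client call runs late.",
    "Sam: 6:30 works. See everyone there.",
])

# max_length and min_length bound the generated summary, in tokens.
print(summarizer(thread, max_length=40, min_length=10)[0]["summary_text"])
```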
For the finale, Samsung gave us a quick preview of what’s coming next, including what looks to be an Oura-style smart ring. It went by so fast that I had to ask the woman sitting next to me if I had imagined it.
Want to learn more for your company’s journey? Please schedule a guidance session or inquiry with me.