Disclosure: Intel is a client of the author.

At Intel’s Innovation conference this past week, the company highlighted the next generation of Windows PCs, clearly anticipating Microsoft’s genAI Copilot tool, which can write documents for you, create presentations from comments, and automate much of what annoys everyone about Outlook. Intel offered up a number of interesting scenarios about this new class of hardware — due in December — that has the potential to transform work and entertainment.

Intel CEO Pat Gelsinger and Rich Uhlig, director of Intel Labs, had a lot to point to at the event.

While there might be issues with this in the US, given the sensitivity here about educational content, Uhlig highlighted a project in China that uses Intel's new AI technology: an AI-driven touchscreen display with a camera that can capture the interaction between teachers and students. As the AI-based tools learn what works and what doesn't, they can coach educators on best practices and build up the ability to tutor or mentor kids autonomously. As in the US, teachers in China are stretched thin, so students often don't get the personal attention they need for the best education.

Automation like this could create AI-driven mentors that not only help children one-on-one but also help teachers become more effective at their jobs. It could also be useful in occupational training, and even in post-hire training for new employees, by reducing the load on workers who would rather be doing their own jobs than training someone else to do them.

This could have broad applications for training that don’t exist today.

Gelsinger, who is hearing impaired, presented new AI-driven technology that goes beyond today's hearing aids and can adapt based on conditions. For instance, if you are in a Zoom meeting, it would pull audio directly from Zoom and block out ambient noise. If someone approached, the user could mute the Zoom audio and switch to local sound, all while the meeting is automatically transcribed so the user doesn't fall behind. It would also do real-time translation, which is invaluable for understanding what people are saying when you don't speak their language.

One capability that already exists in a product coming to market is optimizing sound in a noisy environment. I'd find this useful because I have a really tough time hearing people in acoustically challenged venues. I used to work in construction, and it would have been a godsend there: being able to hear a warning over the noise could have prevented some avoidable injuries.

When I travel, I like to watch videos and listen to music. Gelsinger demonstrated how AI tools could create unique content tailored to each individual user. For instance, if you like Taylor Swift's sound but are tired of her lyrics about ex-boyfriends, AI could create a song that sounds like her but with lyrics on some other topic. (He also showed how this could be done with images and, eventually, videos.)

I also read when I travel, and this suggests that generative AI could not only draft a book or story that better fits my mood, but also create background music consistent with the book I'm reading.

These are just some of the innovations bubbling up at Intel this year. The company, for instance, also introduced AI tools that could choose clothing online that both fits you and better matches your coloring and style. Intel is on a surprisingly rapid cadence, with this new technology set to roll out by the end of this year. Assuming it stays on track and hits its projected milestones, PCs are going to get a lot smarter; by the end of the decade, they'll be doing much of the work for us.

End note: As I was writing this, I came across an AI-created image of “Mrs. Spock.” There are some things AI will create that we’ll want to unsee. That, too, is the nature of progress.
