Adobe says it wants AI to amplify human creativity and intelligence

About a year ago, Adobe announced its Sensei AI platform. Unlike other companies, Adobe says that it has no interest in building a general artificial intelligence platform; instead, it wants to build a platform squarely focused on helping its customers be more creative. This week, at its Max conference, Adobe both provided more insight into what this means and showed off a number of prototypes for how it plans to integrate Sensei into its flagship tools.

“We are not building a general purpose AI platform like some others in the industry are — and it’s great that they are building it,” Adobe CTO Abhay Parasnis noted in a press conference after today’s keynote. “We have a very deep understanding of how creative professionals work in imaging, in photography, in video, in design and illustration. So we have taken decades worth of learning of those very specific domains — and that’s where a large part of this comes in. When one of the very best artists in Photoshop spends hours in creation, what are the other things they do and, maybe more importantly, what are the things they don’t do? We are trying to harness that and marry that with the latest advances in deep learning so that the algorithms can actually become partners for that creative professional.”

That’s very much the core tenet of how Adobe plans to use its AI smarts going forward.

Practically, this will take many different forms, ranging from searching images that Sensei has automatically tagged to performing certain tasks with just your voice.

During today’s keynote, Adobe showed off a few of these future scenarios. Say you have hundreds of images from a portrait shoot for a movie poster. You have a great layout, but you need a photo where your subject looks to the right. As Adobe Labs’ David Nuescheler demonstrated, Sensei may one day be able to help you find exactly that photo because it has tagged all of your images with details like that. Pushing the idea even further, Nuescheler demonstrated how Sensei could order your images by where the subject is looking, from left to right.
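For illustration only, here is a minimal sketch of what client-side filtering and ordering on such a tag could look like. The TaggedImage structure and the gaze_angle attribute are assumptions made for this example; Adobe hasn't published Sensei's actual tagging schema.

```python
from dataclasses import dataclass

@dataclass
class TaggedImage:
    """An image plus the metadata an auto-tagger might attach to it."""
    path: str
    gaze_angle: float  # hypothetical tag: degrees, -90 = far left, +90 = far right

def looking_right(images, threshold=15.0):
    """Filter for shots where the subject looks to the right."""
    return [img for img in images if img.gaze_angle > threshold]

def sort_left_to_right(images):
    """Order shots by gaze direction, leftmost gaze first."""
    return sorted(images, key=lambda img: img.gaze_angle)

shoot = [
    TaggedImage("shot_001.jpg", gaze_angle=-40.0),
    TaggedImage("shot_002.jpg", gaze_angle=5.0),
    TaggedImage("shot_003.jpg", gaze_angle=60.0),
]
print([img.path for img in looking_right(shoot)])       # ['shot_003.jpg']
print([img.path for img in sort_left_to_right(shoot)])  # ordered by gaze
```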

Nuescheler also demonstrated how a designer could go from a sketch that’s fed into Sensei for tagging, to automatically finding stock images that fit the topic of the sketch, to a full movie poster. That’s impressive in itself, but Sensei also keeps track of every design decision you make (Adobe calls this the Creative Graph) and then lets you go back in time and see how a different decision would have changed your final outcome (without touching the rest of your final product). As a side note, Nuescheler also showed how Sensei can automatically determine the background of an image and delete it, something that got just as much applause from an audience that’s used to painstakingly selecting and masking parts of its images as any of the other AI tools the company showed today.
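Returning to the Creative Graph: Adobe hasn't said how it is implemented, but as described it behaves like a branching undo history, where every decision becomes a node and revisiting an earlier decision forks a new branch instead of overwriting the work that followed. Here is a minimal sketch of that structure, with all names hypothetical:

```python
class DecisionNode:
    """One design decision; children are the decisions that followed it."""
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent
        self.children = []

class CreativeGraph:
    """Hypothetical branching history: changing an old decision forks a branch."""
    def __init__(self):
        self.root = DecisionNode("new document")
        self.head = self.root

    def decide(self, description):
        node = DecisionNode(description, parent=self.head)
        self.head.children.append(node)
        self.head = node
        return node

    def revisit(self, node, new_description):
        """Branch off just before an earlier decision, leaving it intact."""
        self.head = node.parent if node.parent else self.root
        return self.decide(new_description)

graph = CreativeGraph()
graph.decide("import sketch")
font = graph.decide("set title font: serif")
graph.decide("apply blue color grade")
# Try a different font; the serif branch and its color grade are preserved.
graph.revisit(font, "set title font: sans-serif")
```

In this model the original serif branch, color grade included, survives alongside the new sans-serif branch, which is what would let you compare the two outcomes without redoing the rest of the poster.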

What Adobe stressed throughout the day is that its focus here is not on making machines creative — instead it’s on amplifying human creativity and intelligence. That message is very much in line with what Microsoft and others are also talking about, though Adobe obviously wants to focus solely on enabling its creative professionals.

Adobe is also very aware of the importance of getting this right. Parasnis called Sensei a “generational bet,” and during today’s keynote he took pains to stress that he sees AI and machine learning as “the most disruptive paradigm shift of the next decade.”

In the creative realm, Adobe definitely has a lot going for it to make this a reality. AI, after all, only works when you have lots of data. And nobody has more data about how creatives work than Adobe.

It’s worth noting that, over time, Adobe also plans to open up many of the features of its Sensei platform to outside developers. Today, it took a first step in this direction by making Sensei’s ability to match fonts from images to fonts in its Typekit library available to third-party developers. We’ll surely see a lot more of this.
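Adobe hasn't published the developer-facing details of that font-matching service yet, so the sketch below is only a guess at what a third-party call might look like; the endpoint URL, request shape, and response fields are all hypothetical.

```python
import requests  # third-party: pip install requests

# Hypothetical endpoint; Adobe has not published the real API.
SENSEI_FONT_MATCH_URL = "https://api.example.com/sensei/v1/font-match"

def match_fonts(image_path, api_key):
    """Send an image and get back Typekit fonts that resemble its lettering."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            SENSEI_FONT_MATCH_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
        )
    resp.raise_for_status()
    # Assumed response shape, e.g. [{"font": "...", "score": 0.93}, ...]
    return resp.json().get("matches", [])
```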

For now, though, the company seems to be more focused on bringing more of its AI smarts to its core services and applications, be that in the cloud or on the desktop.