
It’s a big year for Adobe—Photoshop is turning 35! Born in 1990 as a niche tool for editing scanned images, Photoshop went on to become a household name—and to fundamentally transform how people create and edit images.
Photoshop began grabbing the world’s attention when it provided behind-the-scenes power for some of the amazing visual effects in Terminator 2: Judgment Day back in 1991. It’s been in the limelight ever since, even taking home a technical Oscar in 2018 for its impact on digital filmmaking and visual effects. And now it’s blazing the trail for generative AI in image editing.
So how has Photoshop stayed on the cutting edge for three-and-a-half decades? One answer is Adobe Research. “Great science and clever engineering can transform the quality and speed of existing features in products like Photoshop. As Adobe Research, we are happy to help with that,” says Gavin Miller, Head of Adobe Research. “For us, the gold star is when a new idea rises to the level of ‘Adobe Magic’—a feature doing something you thought was impossible, and that you then want to use every day.”
Over the years, Adobe Research has consistently helped push Photoshop forward. “For nearly every version of Photoshop since I joined, Adobe Research has contributed to a release-defining feature,” says Scott Cohen, Senior Principal Scientist and 26-year Adobe Research veteran whose work has been helping to shape Photoshop since 1999. The next set of Photoshop breakthroughs from Adobe Research will premiere at Adobe MAX 2025, coming up in October.
For researchers, the key to developing so many exciting new Photoshop features has always been understanding how creative people work and then playing the long game by tracking emerging research discoveries and cultivating them into new features. Along the way, there’s a lot of experimenting, testing, and innovating among seasoned Adobe Researchers and in partnership with the research interns who arrive with brilliant new ideas each summer. And, of course, there’s deep collaboration with the Photoshop product team to get everything just right.
“It can take several years to go from proof-of-concept through large-scale data collection, model training, scaling up, and then the final tech transfer stage where there are a lot of iterations to make sure the efficiency, memory footprint, and quality all meet the product team’s requirements,” explains Jianming Zhang, a Senior Principal Scientist at Adobe whose work has included creating AI tools for Photoshop.
The work of Cohen, Zhang, and many of their colleagues is at the heart of some of Photoshop’s most fundamental and groundbreaking features—including the ones that allow users to easily and precisely take on their most critical tasks, from removing and inserting objects to creating and editing just about anything they can imagine.
Revolutionizing object selection
One of the most vital features for users, when it comes to editing digital images, is the ability to select a precise area to edit or remove. “When you want to do local editing, you need to be able to say, I only want to affect these pixels—whether they’re an object or part of an object,” explains Cohen. “So users need very high-quality masking—the ability to capture something as complex as the softness of a person’s hair for editing or to change the background behind it.”
In the early days, Cohen and his Adobe Research colleagues approached the problem by handcrafting complex algorithms that made it possible to select an object in Photoshop—such as a person or a car—to edit, move, or remove.
“The Quick Selection tool that Adobe Research developed years ago is so popular that a lot of people still use it today. It allows you to just use your mouse to brush the object you need, and the feature will magically expand to the boundary. It uses pixel-level similarity to find regions that are similar, along with traditional edge detection methods. It’s so useful and it’s very interactive, very real time, and very performant,” explains Zhang.
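The behavior Zhang describes (brush a few pixels and the selection grows outward, stopping at strong edges) can be sketched as a similarity-based flood fill. This is a toy illustration of the idea, not Photoshop's actual algorithm; the image representation and tolerance threshold are assumptions for the example:

```python
from collections import deque

def quick_select(image, seeds, tol=30):
    """Grow a selection mask outward from brushed seed pixels.

    image: 2D list of grayscale intensities (0-255)
    seeds: list of (row, col) pixels the user brushed over
    tol:   maximum intensity difference to a neighbor to keep growing
    Returns the set of (row, col) pixels in the selection.
    """
    h, w = len(image), len(image[0])
    selected = set(seeds)
    frontier = deque(seeds)
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in selected:
                # Expand only into pixels similar to the one we came from,
                # so growth stops at strong intensity edges.
                if abs(image[nr][nc] - image[r][c]) <= tol:
                    selected.add((nr, nc))
                    frontier.append((nr, nc))
    return selected
```

On a bright object against a dark background, a single brushed seed expands to cover the object but not the background, since the large intensity jump at the boundary exceeds the tolerance.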
The team had been continuously improving selection tools over the years, and when Zhang joined Adobe Research as an intern in 2014, his mission was to push the technology even further. “We wanted to make the feature more robust than before—because sometimes it can be very challenging to find similar pixels in different lighting and textures. So we used deep learning methods. The solution we developed is object-aware, so you can just select the object no matter what lighting, or what kind of texture it has,” says Zhang.
Brian Price, a Principal Research Scientist with years of experience in selection technology, also played a big role in developing new deep learning-based selection tools inside Photoshop.
“With matting, the step that captures the smallest details of an object in order to make selection possible, we needed to work on the sub-pixel level,” explains Price. “For example, along the edges of things you can have pixels that are partly one object and partly the object behind, and it’s incredibly important to get that right so you can capture fine details, such as the spokes on a bicycle wheel or the strings on a tennis racket.”
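The sub-pixel behavior Price describes is captured by matting's standard compositing equation, I = alpha * F + (1 - alpha) * B: an edge pixel that is, say, 30% bicycle spoke gets an alpha of 0.3 rather than a hard 0 or 1. A minimal sketch with assumed grayscale values (not Adobe's implementation):

```python
def composite(alpha, fg, bg):
    """Blend foreground over background using per-pixel alpha values.

    alpha: fractional foreground coverage per pixel, each in [0, 1]
    fg:    foreground intensities (e.g. a white spoke)
    bg:    background intensities (e.g. dark pavement)
    A fractional alpha on edge pixels is what preserves fine detail.
    """
    return [a * f + (1 - a) * b for a, f, b in zip(alpha, fg, bg)]
```

A pixel fully inside the spoke (alpha 1.0) stays foreground, a pixel fully outside (alpha 0.0) stays background, and an edge pixel with alpha 0.3 becomes a faithful mix of the two.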
So Price collaborated with an Adobe Research intern to hand-craft training data—using tools he had created for earlier iterations of Photoshop—and then trained a neural network for a more precise, easier-to-control selection tool.
In fact, refining selection tools has become an Adobe Research tradition—one that’s kept Photoshop’s masking and selection capabilities on the cutting edge. As Zhang explains, “My manager and my manager’s manager had all worked on traditional masking and selection approaches. They passed it to me to reinvent the feature with deep learning, and then my former intern, Zijun Wei, helped develop the newest version of Select Subject, a state-of-the-art version that gets all the details, down to individual hairs. Now Zijun has joined the Photoshop team full-time, working with Jason Kuen, a Senior Research Scientist and leader on selection, to continue taking it to the next level.”
Filling in the gaps
Once users select and remove an object from an image, they face a new challenge—filling the space that’s left behind in a natural, realistic way. For years, Adobe Researchers have been at the forefront of this capability, too.
In 2010, Photoshop introduced Content-Aware Fill, an Adobe Research innovation that allowed users to easily conceal or replace unwanted objects. The feature quickly grabbed users’ attention—and the video that announced it went viral on YouTube. “The big innovation in Content-Aware Fill was PatchMatch technology, which can intelligently fill holes by borrowing pixels from other parts of the image,” says Nathan Carr, Adobe Research Fellow and VP. PatchMatch was first unveiled in a widely-recognized paper at SIGGRAPH, a top computer graphics conference.
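At the core of the "borrowing pixels" idea Carr describes is a nearest-neighbor patch search: for a region around the hole, find the most similar patch elsewhere in the image and copy from it. PatchMatch's real contribution was a fast randomized algorithm for this search; the brute-force scan below only shows what it computes, on an assumed grayscale-list image format:

```python
def best_patch(image, target, exclude, size=3):
    """Find the top-left corner of the patch in `image` most similar to
    `target` by sum of squared differences, skipping positions in
    `exclude` (for example, patches overlapping the hole being filled).
    """
    h, w = len(image), len(image[0])
    best, best_cost = None, float("inf")
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            if (r, c) in exclude:
                continue
            cost = sum(
                (image[r + i][c + j] - target[i][j]) ** 2
                for i in range(size)
                for j in range(size)
            )
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best
```

A hole-filling loop would repeatedly run this search for patches bordering the hole and paste in the matched pixels; PatchMatch makes the search fast enough to do that interactively on full-size photos.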
As deep learning and AI techniques advanced, Researchers continued to improve Photoshop’s ability to fill in gaps with more semantic understanding of the contents. “Our most recent success was Generative Fill, which applies generative algorithms to fill a hole using knowledge from the whole world of photos—not just content from your own photo. It’s a really exciting example of the evolution of our technology over time,” Carr explains.
“With Generative Fill, we’re using diffusion models to handle very challenging cases and the result is far more photorealistic than before,” adds Zhang. “In addition to removing objects, it can also do all kinds of hole filling, so if you want to expand the photo frame or if you want to replace an object, not just remove it, you can. For example, you can remove someone’s glasses or change them to sunglasses. It can do so many amazing things beyond the traditional algorithm.”
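Adobe hasn't published Generative Fill's internals, but the general pattern Zhang describes (a diffusion model that invents content only inside the hole while staying consistent with the rest of the photo) is often implemented by pasting the known pixels back in after every denoising step. A generic, heavily simplified sketch with a stand-in denoiser, purely to show the masking loop:

```python
import random

def inpaint(image, mask, denoise_step, steps=10):
    """Diffusion-style inpainting loop (illustrative, not Adobe's method).

    image:        known pixel values (a flat list for simplicity)
    mask:         True where a pixel is inside the hole to be generated
    denoise_step: callable(x, t) -> x, one step of a denoising model
    Starts from noise; after each step, known pixels are restored so the
    model only hallucinates content inside the masked region.
    """
    x = [random.gauss(0, 1) for _ in image]  # start from pure noise
    for t in range(steps):
        x = denoise_step(x, t)               # model updates every pixel
        x = [xi if m else known              # clamp known pixels back
             for xi, m, known in zip(x, mask, image)]
    return x
```

With a real diffusion model as `denoise_step`, the clamping is what lets the generated region blend seamlessly with the untouched surroundings, whether the edit removes glasses or swaps them for sunglasses.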
As with object selection, the latest innovations in Generative Fill were years in the making. “Our previous versions laid solid groundwork so that we could achieve Generative Fill. It comes from a continuous effort inside Adobe—including the collaboration between Adobe Research and the Photoshop team—to keep pushing the quality of the technology,” says Zhang.
Seeing around corners—and keeping Adobe products ahead
Beyond the tools for selecting objects and filling in gaps, Adobe Research innovations have powered a huge number of beloved Photoshop features over the years. Ever since its founding, back when it was known as the Advanced Technology Group, Adobe Research has focused on two big things: staying at the forefront of groundbreaking research trends and figuring out how those new ideas can help shape Adobe products.
“The promise of creating magical new features inspires and requires us to track the state of the art and, beyond that, to invent new technologies that combine the best of industrial tool-building and academic knowledge. We constantly reframe how we think about the task, from editing pixels, to editing regions, objects, semantics, and styles, to creating experiences. The visual feast of the real world and the rich imagination of our users brings the tools we invent to life, and gives a sense of purpose to our work,” says Miller.
“We’ve been able to help keep Photoshop, and other Adobe products, ahead all of these years because we’re out in the world looking for new ideas,” adds Cohen. “As soon as we see promise, we jump on it, explore it, and help reveal what it can do.”
To keep advancing the technology behind Photoshop, Adobe Researchers engage deeply with their research communities, publishing novel research, sharing ideas and inspiration with their academic counterparts at conferences and through academic partnerships, and bringing in talented interns every summer (who sometimes join the team after their studies) to keep new ideas flowing. Researchers’ contributions to Photoshop are also made possible by close collaboration with long-time product partner Sarah Kong and her Applied Research Team.
And the Adobe Research team is already thinking about innovations that will shape Photoshop over the next few years, even as technology is changing faster than ever. “We see more powerful models coming up every day,” says Zhang. “There will be a new code base, or a new framework, or a new paradigm. So our challenge is to absorb all of this new information very quickly, always thinking, ‘What’s the next big thing? What important features can we help develop for Photoshop or other Adobe products?’ The things we’re working on now will be transformative.”
Carr adds, “I don’t know what the next crazy algorithm will be even six months from now, but I have confidence in the talent inside Research to look around the corner and deliver the next big breakthrough for Adobe.”
Wondering what else is happening inside Adobe Research? Check out our latest news.