As guests stood before a screen on which a human face flickered and then changed, a silent solemnity descended upon the gallery floor. It was not painted or sketched. It was conjured: not conceived by a hand, but computed by an algorithm trained on thousands of facial fragments, emotions, and histories.

Toronto’s InterAccess space was replete with such glimmers. Exhibits weren’t hung on walls; they pulsed on them, projected and alive. One particularly evocative show included a slowly shifting face of Drake, constructed through a generative model designed to examine fluid identity. Remarkably effective in stirring both recognition and estrangement, the piece prompted a collective halt. No labels, just movement.
| Category | Details |
|---|---|
| Exhibition Title | “Unruly Intelligences” and “ArtOfficial” |
| City | Toronto |
| Featured Artists | Sanaz Mazinani, collaborative AI-art creators |
| Artistic Mediums | AI-generated portraiture, interactive installations, AR technology |
| Notable Works | AI-assisted portrait of Drake, “An Impossible Perspective” |
| Central Themes | Identity, machine vision, cultural memory, digital surveillance |
| Technologies Involved | Generative AI, custom neural networks, augmented reality |
| Timeline | January–February 2026 |
Visitors walked between stations with soft-lit terminals and abstracted reflections. Some installations whispered audio pieces. Others used sensors to repaint your face in real time, blending you with previous visitors. There was something disturbingly intimate about seeing yourself as processed through machine vision.
Sanaz Mazinani’s work at the Stephen Bulger Gallery offers a more direct inquiry. Titled “An Impossible Perspective,” her images were generated using a custom-designed AI trained not only on visual patterns but on her family archive and political iconography. What emerged was layered, occasionally shocking: faces surfaced from historical data like ghosts pressing through gauze.
These weren’t just aesthetic experiments; they conveyed deep intentionality. In the context of debates about surveillance and digital memory, the works functioned as both art and commentary. Mazinani’s method involved building the AI itself, ensuring that the generative process remained artistically driven rather than outsourced.
At Arcadia Earth, immersive technology obliterated the distinction between environment and portrait. One installation shifted from rainforest to digital refugee, folding climate data into facial anatomy. Through augmented reality, it created a space where interaction mattered: viewers became part of the portrait’s story, altering it merely by standing still.
Halfway through, I noticed how often these portraits appeared to be looking back. During the first week, artists and engineers mingled, offering insights into how each neural net was built and how human bias was managed, or sometimes left exposed. Unlike static paintings, these portraits evolved. Some adjusted to air quality. Others responded to your breath. One even integrated heartbeat sensors.
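To make the idea concrete, here is a minimal sketch of how a sensor-reactive portrait might fold live readings into its render state. The sensor names, ranges, and weighting below are invented for illustration and are not drawn from any specific exhibit described here.

```python
# Hypothetical sketch: turn raw sensor readings into normalized
# render parameters for an evolving portrait. All thresholds and
# mappings are illustrative assumptions, not an exhibit's real code.

def portrait_state(air_quality_index: float, breath_rate_bpm: float) -> dict:
    """Map sensor readings to render parameters in known ranges."""
    # AQI 0-500 -> haze amount in [0, 1]; dirtier air blurs the face more.
    haze = max(0.0, min(1.0, air_quality_index / 500.0))
    # Resting breath is roughly 12-20 bpm; faster breathing speeds
    # the animation, clamped to a 0.5x-2x tempo range.
    tempo = max(0.5, min(2.0, breath_rate_bpm / 16.0))
    return {"haze": round(haze, 3), "tempo": round(tempo, 3)}

print(portrait_state(air_quality_index=150, breath_rate_bpm=24))
```

The point of the clamping is artistic as much as technical: however extreme the room’s readings, the portrait stays within a legible visual range.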
The exhibit “Unruly Intelligences,” organized by the John H. Daniels Faculty, pushed the notion further. Here, portraiture moved beyond faces: AI was trained to trace bodily movement through time, turning gesture into pattern. A dancer’s arc became a constellation on screen. Exceptional in its clarity of aim, the piece showed how motion itself can be a form of self-portrait.
Perfection was not the goal of these projects. They accepted fault and embraced malfunction. An incomplete loop or a misrendered face wasn’t a failure; it was a revelation. That looseness gives the work a startlingly human quality.
For younger creators, this approach has proved particularly valuable. Through workshops at InterAccess, students learned not only to use AI tools but to alter them. The emphasis was on dialogue, not automation. Code wasn’t a replacement; it was a brush with fresh bristles.
By partnering with computational tools, these artists are rethinking authorship. The hand is still present, but it is now joined by a swarm of pattern detectors, predictive engines, and weighted data points. That relationship allows for artworks that are extraordinarily versatile: portraits that not only represent but react.
Critics have underscored the ethical difficulties of such practices. Data sourcing and consent remain contentious. If a model was trained on your image, are you a participant or a dataset? For early exhibitions like these, the answer is often left open-ended, but the tension sharpens the creative aim.
The shift from canvas to computation isn’t a forsaking of tradition; it’s a building on it. These artists aren’t stepping away from storytelling; they’re inventing new grammars for it. The narrative structure has changed, a responsive dialogue replacing the single frame.
By incorporating machine learning and sensory input, several exhibits even took on therapeutic qualities. One read the viewer’s mood through facial sentiment analysis and adjusted its color palette accordingly. Unexpectedly heartfelt, and subtly reassuring.
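The mood-to-palette step can be sketched in a few lines. This is a minimal illustration assuming the sentiment model emits a single score in [-1, 1]; the color choices and interpolation are my own assumptions, not the installation’s actual pipeline.

```python
# Hypothetical sketch: map a facial-sentiment score to a small display
# palette. Score range, anchor colors, and brightness steps are all
# illustrative assumptions.

def mood_to_palette(sentiment: float) -> list:
    """Interpolate from cool, muted tones (negative mood)
    toward warm, saturated tones (positive mood)."""
    sentiment = max(-1.0, min(1.0, sentiment))  # clamp to valid range
    cool = (70, 90, 140)   # slate blue for somber readings
    warm = (240, 170, 90)  # amber for bright readings
    t = (sentiment + 1) / 2  # map [-1, 1] -> [0, 1]
    base = tuple(round(c + t * (w - c)) for c, w in zip(cool, warm))
    # Derive a three-swatch palette by brightening the base color in steps.
    return ["#%02x%02x%02x" % tuple(min(255, int(ch * f)) for ch in base)
            for f in (0.6, 0.8, 1.0)]

print(mood_to_palette(-1.0))  # muted blues
print(mood_to_palette(1.0))   # warm ambers
```

A real installation would smooth the score over time so the room’s colors drift rather than flicker with every frame of the sentiment model.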
Over the last few years, Toronto has emerged as a quiet leader in AI art: not through spectacle, but through sustained inquiry. The city’s galleries have made room for nuance, spaces where culture and code coexist.
These portraits do not shout. They whisper, murmur, shimmer. They invite thinking, not simply appreciation. In that, they recall the primary function of portraiture: to hold up a mirror and ask, softly, “Is this you?”
Sometimes the answer is yes, but in ways that are yet unclear to us.
