NORMAN-AI
Wireframes
Design systems
Prototype






Project Overview
Norman AI approached me with a challenge:
They had a mobile-only technical platform for sharing AI models, but it suffered from severe UX issues. The company also had no product manager, no prior user research, and only a small development team.
My Role
Led user research, product decisions, UX/UI design, and visual language creation.
Worked closely with the CEO and CTO to align user needs with business goals.
Ensured the design would scale from desktop to a later mobile and web refresh.
The Challenge
From the initial product review and competitor research, three main challenges stood out:
No clear hierarchy or onboarding in the existing mobile product.
High technical barrier — even internal developers struggled to upload and run models.
Low discoverability of new models.
Research & Findings
In my first week, I conducted a product audit of Norman’s existing mobile app and a competitive analysis of leading platforms including Baseten Flo, Replicate, and Hugging Face.
Complex workflows: both our app and competitors required technical expertise, making them inaccessible for non-technical users.
No onboarding or guidance: first-time users were left without clear steps to upload or run a model.
Opaque processes: users couldn’t anticipate what would happen next in the flow.
The screenshots below show the original Norman mobile experience before the redesign, highlighting the complex workflows, lack of guidance, and unclear processes identified during the audit.




Process & Solutions



Simplifying Model Setup
Problem: Uploading models was overly technical and intimidating for first-time users.
Solution: Introduced Smart Auto-Population to fill technical fields based on input type, paired with a live preview so users instantly see how changes affect the end experience.

Clear entry point with model type selection and logo upload to set the stage for customization.

Grouped technical settings into collapsible sections, reducing visual noise and guiding focus.

Choosing an input type auto-fills details, speeding up setup and making configuration intuitive.

Choosing an output type auto-fills details, speeding up setup and making configuration intuitive.

Final review screen consolidates all configurations with edit options before publishing.

Progress broken into clear, meaningful steps to keep users informed during model creation.

Creating a Flexible Multi-Modal Canvas
Challenge: Most model tools lock you into one output type at a time (image, video); you can’t mix models or run the same prompt across several options. The challenge was to give non-technical users a way to explore different models in one place, without switching screens or learning a complex setup.
Solution: I designed a clean canvas where you drop a prompt once and connect it to any model you want. Each model appears as its own step with a clear preview, and users can try multiple outputs side by side. The flow shows how everything connects, so the experience stays simple even when mixing image, text, audio, and video models.
Looking ahead: Save common flows as reusable templates. This will make it easier to build richer workflows without starting from scratch each time.

A clean screen where the user writes a prompt, connects a model, and runs the first output, keeping the flow simple and easy to follow.

One prompt fans out to several models, letting users compare outputs side by side without extra setup.

The user can replace a model without leaving the flow, with all model options shown inline so they stay oriented and keep working smoothly.

Users can open any result, browse the full set of variations, and take quick actions like saving or sharing.

Improving Model Discovery
Challenge: Finding new models felt slow and lacked visual engagement.
Solution: Built an Explore flow with instant visual examples, personalized recommendations, and one-click try, reducing friction from first interest to first run.


Outcomes & Impact
Presenting the redesigned desktop experience to investors during development proved to be a turning point. The new direction increased investor confidence and directly supported Norman in securing additional funding.
Beyond funding, the redesign also shaped the company’s long-term product vision: the new design language and UX principles are now being extended to both the mobile app and an entirely new website.
Reflection
This project proved I can own product and design end to end, aligning user needs with business goals.
Looking ahead, I envision custom invoke canvases per model type, making Norman even more flexible and future-ready.
Explore next

Connect the right people, find anything that you need and automate the rest.
Video Monetization Platform

Sweetch