NexusGPT

Enhancing first-crack AI agent usability with guided prompt design
Overview

1/7

NexusGPT is a platform that helps businesses build and deploy AI agents.

These agents are digital assistants built to handle repetitive, non-creative administrative tasks like sending weekly outreach emails, summarizing meetings, or compiling performance reports.

Timeline

2 months (Dec '24 - Jan '25)

My role

Product designer

Team

1 PD, 1 PM, 1 DEV

Status

In development

Background

2/7

Building agents in Nexus is simple. Or so it seemed.

Building an agent in Nexus is as simple as writing a prompt that outlines its roles and responsibilities. However, what initially seemed straightforward turned out to be much more complex in practice.

Problem

3/7

Ideal

Write a prompt. Done. That's the promise.

Reality

Users struggle to prompt. Agents underperform.

Most users turned to support (the dev team) for help writing their prompts — but due to limited dev resources this was not sustainable

To overcome prompting struggles, users relied on support (the devs) to create effective prompts, but this wasn't sustainable—users should have the tools to do it themselves.

Since most users struggled with prompting, a prompt form was built to hold their hands

We gave users a 3-step form to describe their agent and its responsibilities. Behind the scenes, Nexus generated the prompt. Simple idea: avoid manual prompting altogether.

The form made things much worse, adding friction without solving the core problem

Over 90% of users reported that the agents were undeployable. Despite the form, they still reached out to the dev team to adjust their prompts—ultimately, the form only added more friction.

All this added friction increased the burden on the dev team and left users engaging with a feature that failed to solve the core problem

>90%

of created agents were undeployable

An overwhelming majority of agents created through the form were essentially useless.

80%

of generated prompts needed edits

The generated prompts needed heavy edits, showing the form wasn’t doing its job.

6

hours lost weekly

The dev team manually fixed agent prompts through client calls, which drained limited dev resources.

Insights

4/7

After talking to devs and watching how they manually improved prompts, it became clear: our flow asked the wrong things — or didn’t ask enough.

Focused on “what,” not “how”

Agent creation = hiring. But our flow missed the basics: job, personality, training, onboarding.

Training came after the fact

Knowledge wasn't part of setup. Agents launched unprepared and underinformed.

Key fields were optional or too basic

Only name and job title were required. Purpose and capabilities? Skipped or vague.

No preview, no feedback

Users couldn’t see the prompt or agent evolve. No chance to course-correct early.

No real personality building

Just one “writing style” input. No deeper traits, tone, or values — agents felt flat.

Lack of immersion = lack of ownership

It felt like “filling out a form,” not “creating an agent.”

And on top of all that, the UI didn’t help — it was cramped, flat, and lacked any real visual hierarchy. Nothing about it felt like you were building something meaningful.

Core challenge

How might we help users create effective, on-brand agents on their first try — without needing to master prompt engineering?

Research

5/7

This wasn't a form problem. It was a framing problem.

To ground the idea, I studied how real teams onboard roles like the ones users gave their agents — support reps, consultants & sales associates.

FINDINGS

The original agent form only covered ~20% of a typical onboarding process.

It captured surface-level inputs — job title, responsibilities, writing style — but missed the deeper layers: tone, behaviour, tools, training, and context. Most agents weren’t usable on the first try.

CONCLUSION

We didn’t need a better form.

We needed a better frame — one that treats agent creation like onboarding a teammate.

Rebuilding the flow around the onboarding model

We modeled the flow after real-life hiring to ask better, more insightful questions — helping users define agents more clearly and end up with stronger, more usable first drafts.

Designs

6/7

I turned previously optional identity fields into a required first step — helping users define who the agent is from the start, reducing ambiguity for both the user and the system.

This step mirrors how real teams start onboarding — by clarifying who the person is, what they do, and where they belong. It’s about grounding the agent in a role and context before anything else.

  • Info that was optional before is now front and center.
  • A live preview gives the agent a face — making the process feel more personal and real.
  • Establishing a solid identity nudges users to think of the agent as a clearly defined persona with intent — not just a text generator.

  • Lets users pick from leading LLMs depending on the task — whether it needs speed, reasoning, creativity, or long context.

Prompts were often vague about the agent’s role, so we introduced agent types to give structure — and dramatically improve prompt quality.

Just like a real role needs a job description, agents need purpose. Users now also choose what type of work the agent will do. This raised prompt quality immediately, because Nexus knows what to expect.

  • Selection pushes users to be specific about intent
  • Choosing a type helps structure the prompt behind the scenes

To prevent hallucinations and silence, users can now add knowledge and fallback logic during setup.

I designed the Knowledge step to define not just what the agent should know, but how to retrieve it and what to do when it doesn’t.

  • Add knowledge when it matters — skip when it doesn’t
  • Define how the agent handles gaps or uncertainty
  • Reduce hallucination by grounding what the agent knows

I reduced deployment friction by letting users equip agents with the right tools and skills during creation, so they're functionally ready from the start

Previously, agents were created first, and only after they failed at certain tasks did users realize they needed to attach skills (like search, API access, Airtable, etc.). This made agents feel limited or broken by default — and it often wasn’t obvious what was missing until something didn’t work.

  • Reduce launch friction by ensuring the agent is functionally ready
  • Relevant skill suggestions remove the guesswork

To address agents lacking personality or brand alignment, I improved the personality step with better questions and a more comprehensive tone builder.

Users can upload brand copy → we analyze tone → generate a matching voice. Or just pick a preset. Either way, agents sound way more on-brand.

To emphasize intentionality and immersion, I designed a live preview that updates in real time as users build their agent

We made the flow full-screen to feel more immersive. Inputs on the left, live agent preview on the right — so users can watch their agent come to life in real time. The snappy transitions are pretty cool too.

Conclusion

7/7

Still in dev, but early feedback is strong

No hard data yet, but users were way more confident during walkthroughs. We’ll measure prompt quality pre/post-launch to see how much this actually helps.

Final thought: users don’t need power. They need clarity

We thought we were simplifying prompting. But the real win came from designing an experience that felt natural — something people already knew how to do.
