Automated User Story Creation in JIRA using Figma MCP Server
An orchestration workflow that can be kicked off with a single mouse click. The workflow uses the Figma MCP server to pull an entire multi-page design file and automatically create user stories in JIRA without any manual intervention.

Project Overview
A couple of weeks ago, I chanced upon an article that purported to show how to set up a JIRA integration in 30 minutes; I was intrigued but skeptical. So I decided to take a short break from my article series on rapid prototyping and put the 30-minute claim to the test.
Spoiler Alert: It takes much longer than 30 minutes 😒
I will write a much more detailed article covering the full workflow and step-by-step instructions later; in this one I want to focus on my experience and the challenges one can encounter while setting up an orchestration workflow for a real-world application.
The core objective I set out to achieve was:
Use a relatively complex Figma design file containing multiple pages and components to automatically create epics and user stories under those epics in JIRA without any manual intervention.
A couple of years ago, setting up an automation like this was simply unthinkable for someone with no coding background. It’s a testament to how far modern AI agents and orchestration platforms have come that not only is this achievable by anyone, it can be done with GUI-based tools. The barrier to entry for initiatives like these has dropped drastically, and I’m happy (albeit a bit exhausted after setting this up) to be part of this journey. That being said, a professional integration between Figma and JIRA for an actual real-world use case (and not a social media influencer’s 30-minute-friendly demo) takes a lot of custom coding, plus the ability to look at what’s happening holistically and suggest potential fixes to the AI coding agent you’re using.
Orchestrating this workflow taught me that, like most tools, AI and AI-agent orchestration platforms supplement product management skills rather than supplanting them. As we move forward, I expect these practices to gain more widespread adoption within our space.
Initial Setup
- Hardware: A decently powerful PC or Mac; a large part of this setup runs locally on your machine and not the cloud – this is more secure, but also more resource-intensive.
- Figma: You will need a Figma account and a sample design file. I picked one available for free use from the Figma community: a design for a recipes website with 4 different pages, which fairly accurately represents a real-world use case of a small but complex web application. You will also need the Figma desktop application with the Dev Mode MCP server option enabled, plus a Figma API key. Also Read: What is MCP? (TL;DR: It’s an open standard developed by Anthropic that lets AI systems connect to external applications and data sources.)
- JIRA: Head over to Atlassian to claim a free account and set up a JIRA project, then create a JIRA API token.
- Docker: Install Docker Desktop which will allow you to deploy environments as containerized applications.
- n8n as a containerized app on Docker: n8n is a no-code workflow automation platform that lets you connect apps and services to one another. Install it on Docker using these instructions.
- AI Model API: As of this writing, Claude is the king of AI-based coding, so much so that Anthropic tempted me into subscribing to a Max plan, which gives me Claude Code and the Claude API as well. Grab your API key at the Anthropic Console.
Once all the required setup was done, I was able to open n8n via Chrome. With the Docker setup, n8n is hosted locally, and I was able to access it via http://localhost:xxxx/, with “xxxx” being the port number. There are ways to expose this endpoint to the public internet (useful for the further enhancements outlined below). Once n8n was open, I added the credentials for Anthropic, Figma & JIRA.

Once this was done, I whipped out my trusty Claude Desktop app and set about writing a master prompt that would help me through the actual orchestration (see the master prompt at the end of this article).
I then provided the initial question below to Claude, and it came up with a complete solution architecture to orchestrate the automation in n8n.
I want to build an agentic workflow that will allow me to read a figma design file and then create JIRA initiatives, epics and user stories. I also want to use the same design files to generate product/business requirement documents in confluence.
I need detailed and minute instructions, including all installation and setup instructions to achieve this. I’d also like to use Cursor IDE and n8n for orchestration. I don’t have either installed and I have no coding knowledge.
For the MVP, I decided to provide the Figma design file identifier manually (this is evidenced by the very first node, named “Manually Provide Design File”). In future phases, I plan to enhance this workflow so that I can send a Figma design URL to a chatbot via Telegram or Slack, have it ask a few clarifying questions (such as which JIRA project key to create the user stories and epics under), and then kick off the full workflow based on that input.
This is how my fully orchestrated workflow looked at the end, mid-execution. The green-outlined nodes have already executed, while the amber ones are in progress.

Real-World Challenges
Challenge #1
Real-world Figma design files are often messy (either because they are very large or because the UX designers haven’t organized the content consistently at every level), so it is important to check that the design file you are passing to Claude has the structure Claude needs to decipher it accurately. For this purpose, I added a node that checks whether the elements retrieved from the Figma design file have the right structure for Claude to deconstruct.
NOTE: The { } nodes within the workflow depicted above are all custom JavaScript code. Every single line of it was provided by Claude.
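To give a flavour of what such a validation node might contain, here is a minimal sketch in n8n Code-node JavaScript. It is my own simplified illustration (not the Claude-generated code from my workflow) and assumes the previous node returns the raw Figma file response with pages under `document.children`:

```javascript
// Hypothetical n8n Code node: sanity-check the Figma file structure.
// Assumes the previous node returned the Figma response as a single item,
// with the page tree under item.json.document.children (field names may
// differ depending on how you fetch the file).
const file = $input.first().json;

const errors = [];

if (!file.document || !Array.isArray(file.document.children)) {
  errors.push('No document.children found – is this a full file export?');
} else {
  for (const page of file.document.children) {
    if (page.type !== 'CANVAS') {
      errors.push(`Top-level node "${page.name}" is not a page (type: ${page.type})`);
    }
    if (!page.children || page.children.length === 0) {
      errors.push(`Page "${page.name}" is empty`);
    }
  }
}

if (errors.length > 0) {
  // Fail fast so Claude never sees a malformed file
  throw new Error('Figma structure check failed:\n' + errors.join('\n'));
}

return [{ json: { ...file, structureCheck: 'passed' } }];
```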
Challenge #2
What the 30-minute JIRA integration posts and videos won’t tell you is that when you set up a workflow to consume Figma design files and create user stories in JIRA, you will quickly run into the model’s token limits. This was the second challenge I encountered. My design file was so large that Claude could not read it fully in one shot. LLMs have a token limit, and I found that my design file input to Claude was being truncated midway, with the Claude agent’s output giving me garbage instead of epics and user stories. I tried bumping up the maximum token limit in the AI Agent configuration node, but that didn’t help much because, as good as Claude is, its maximum token limit is laughably small compared to Google’s Gemini. To mitigate this, I had to add another custom block to split the design file into logical chunks, pass each chunk to Claude as an independent API call, and have it generate user stories specific to that chunk. This is the code node sitting just before the AI agent. Of course, this meant that after the multiple Claude runs I had to stitch the output back together into a single file, which meant adding yet another code node to merge the chunked output from Claude.
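Here is a minimal sketch of what that chunking node might look like, again as n8n Code-node JavaScript. It is an illustrative, assumption-laden version (splitting by Figma page, then by top-level frame when a page is still too big), not the exact code from my workflow:

```javascript
// Hypothetical n8n Code node: split the Figma document into chunks so each
// Claude call stays within the model's context window.
const file = $input.first().json;
const MAX_CHARS = 60000; // rough per-chunk budget; tune to your model's limit

const chunks = [];

for (const page of file.document.children) {
  const serialized = JSON.stringify(page);

  if (serialized.length <= MAX_CHARS) {
    // The whole page fits in one chunk
    chunks.push({ pageName: page.name, nodes: page.children });
  } else {
    // Page is too big: group its top-level frames into smaller chunks
    let group = [];
    let size = 0;
    for (const frame of page.children) {
      const frameSize = JSON.stringify(frame).length;
      if (size + frameSize > MAX_CHARS && group.length > 0) {
        chunks.push({ pageName: page.name, nodes: group });
        group = [];
        size = 0;
      }
      group.push(frame);
      size += frameSize;
    }
    if (group.length > 0) chunks.push({ pageName: page.name, nodes: group });
  }
}

// One n8n item per chunk: downstream nodes (and the AI agent) run once per item
return chunks.map((chunk, index) => ({
  json: { chunkIndex: index, totalChunks: chunks.length, ...chunk },
}));
```

On the other side, the merge node can simply collect all chunk outputs with `$input.all()` and concatenate their story arrays back into one file.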
Challenge #3
In the integration phase, I realized that JIRA was rejecting my requests because the priority value I was passing was not in a format it accepted. I had to add another custom code block to parse the priority values coming from Claude into a format JIRA would treat as valid, as indicated by the code node titled “JIRA compatible priority parsing”. I also had to make sure this node passed along the full file data, in addition to the parsed JIRA-compatible priority values, because that data would be needed for the actual user story creation step.
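A simplified sketch of what such a priority-parsing node might do (field names like `userStories` and `priority` are assumptions for illustration; yours will depend on how the Claude output was merged):

```javascript
// Hypothetical n8n Code node: normalize Claude's priority wording into values
// JIRA accepts, while carrying the full story payload forward for later nodes.
const VALID_PRIORITIES = ['Highest', 'High', 'Medium', 'Low', 'Lowest'];

const PRIORITY_MAP = {
  critical: 'Highest',
  urgent: 'Highest',
  high: 'High',
  medium: 'Medium',
  normal: 'Medium',
  low: 'Low',
  trivial: 'Lowest',
};

return $input.all().map((item) => {
  const data = item.json;

  const stories = (data.userStories || []).map((story) => {
    const raw = String(story.priority || 'Medium').trim().toLowerCase();
    const mapped = PRIORITY_MAP[raw] || 'Medium';
    return {
      ...story,
      priority: VALID_PRIORITIES.includes(mapped) ? mapped : 'Medium',
    };
  });

  // Keep the rest of the file intact – later nodes still need epics, etc.
  return { json: { ...data, userStories: stories } };
});
```

My target here was JIRA’s standard priority scheme (Highest/High/Medium/Low/Lowest); if your project uses a custom scheme, the map would need to match it.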
Challenge #4
This challenge almost had me give up in frustration, but I persevered and was finally able to resolve it. I found that I had to first extract and create the JIRA epics, so that user stories could later be created with the appropriate epic links. I set up a loop to iterate through the file and create epics in JIRA, storing all the returned epic keys so they could be passed to the next stage as references for linking user stories. I then discovered that JIRA’s standard response does not return the epic title; it only returns the epic key. This was a problem because the file being passed to JIRA for user story creation needed an epic key under each user story, and there was no way for the code to know which epic key corresponded to which epic title. So, while I could pass an epic key under every user story, I had no way to ensure it was the epic the story should actually be linked to. To fix this, I had to add two additional custom code blocks to wrangle the epic keys and then package the file with the user stories and the right epic key for each one.
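Here is a rough sketch of that title-to-key reconciliation, assuming the epic title was carried alongside each JIRA create-epic response (field names like `epicTitle`, `epicKey`, and `userStories` are hypothetical and used only for illustration):

```javascript
// Hypothetical n8n Code node: reconcile the epic keys returned by JIRA with the
// epic titles Claude generated, then attach the right key to each user story.
// Assumes two inputs were merged upstream: the created-epic results and the
// original story file.
const items = $input.all().map((i) => i.json);

// Build a title -> key lookup from the epic-creation loop's output.
// JIRA's create-issue response only returns the key, so the title has to be
// carried alongside the request (e.g. stored on the item before the HTTP call).
const epicKeyByTitle = {};
for (const item of items) {
  if (item.epicTitle && item.epicKey) {
    epicKeyByTitle[item.epicTitle] = item.epicKey;
  }
}

// Attach the matching epic key to every user story.
const storyFile = items.find((i) => Array.isArray(i.userStories));
const linkedStories = (storyFile?.userStories || []).map((story) => ({
  ...story,
  epicKey: epicKeyByTitle[story.epicTitle] || null, // null = no matching epic found
}));

return [{ json: { userStories: linkedStories, epicKeys: epicKeyByTitle } }];
```

The important part is carrying the epic title alongside the create-epic request so the response’s key can be matched back to it, since JIRA itself won’t echo the title back to you.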
Success!…Well, relatively speaking.
Getting through the challenges took an immensely long time, so much so that my ongoing conversation with Claude to troubleshoot the issues started showing classic symptoms of context dilution (NOTE: context dilution is what happens when your ongoing conversation with an LLM exceeds its token limits and the model’s memory starts to slip, resulting in errors and an inability to recall earlier parts of the conversation) – for example, asking me to make code changes it had already had me make before and which I knew didn’t work. I had to ask Claude to summarize the entire conversation up to that point and start a new chat with the same master prompt plus the summarized context to get around the token limits. I also had to save all the custom code files I had added as nodes and send them to Claude so that it was back up to speed in the new chat.
Despite all the challenges, it was immensely rewarding to see the full workflow turn green at the very end.

I then went to my JIRA project, and lo and behold! All my user stories and epics were created!

The user stories themselves were quite well-documented and provided a good starting structure for me to add additional details.

Closing Thoughts
The most humbling part of this entire experiment was realizing that none of the user stories ended up linked to an epic in JIRA – the very thing I had spent the most time on, getting the right epic keys passed to JIRA for each user story. On further troubleshooting, I found that the field that links a user story to an epic can be added to an issue type scheme within JIRA’s project settings, but that scheme cannot be applied to personal projects; I would need a premium or enterprise JIRA subscription to do that.
NOTE: Atlassian doesn’t make it clear anywhere in the JIRA admin settings that some options are only available on premium or enterprise subscriptions, and my jaw dropped when I uncovered this in a JIRA community discussion.
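For reference, on a JIRA Cloud plan and project type where the parent (epic link) field is available, linking a story to an epic at creation time comes down to including the epic key in the create-issue payload – a hypothetical sketch, not something that worked in my free-plan setup:

```javascript
// Hypothetical JIRA Cloud create-issue payload, shown for reference only.
// Where the parent (epic link) field is available, linking a story to an epic
// is just a matter of passing the epic key as the parent.
const payload = {
  fields: {
    project: { key: 'REC' },   // hypothetical project key
    issuetype: { name: 'Story' },
    summary: 'As a visitor, I can browse recipes by category',
    priority: { name: 'Medium' },
    parent: { key: 'REC-1' },  // the epic key gathered in the earlier loop
  },
};
```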
Overall, this experience taught me that setting up an enterprise-grade orchestration workflow takes time, a lot of patience, and careful analysis of every roadblock you hit. At the same time, it was immensely satisfying to see an idea come to life; the sheer ability to convert a Figma design into boilerplate epics and user stories will remove a lot of upfront work for product managers like me, and will also give engineering the information it needs to begin estimation exercises sooner.
Future Enhancements
I plan on enhancing this workflow further by adding a chatbot capability where one can request a Figma design file to be converted into either functional user stories or technical user stories (specific to component libraries and design systems), with the requestor receiving the final summary as a report.
I’d love to hear your thoughts on this, hit the comments and let me know!
Master Prompt to Claude
I crafted the master prompt below to get Claude to come up with the full orchestration plan and provide everything (including all the custom code) that I’d need to set up the full orchestration workflow.
# Role:
You are a highly experienced solutions architect who advises non-developers on architecting and building no-code, AI-based solutions & AI-based tooling for the problem statement that they provide.
## Example of a no-code AI based architecture:
Any solution architecture that uses platforms like Cursor (or similar), Docker (or similar), n8n (or similar) for orchestration, and Pinecone (or similar) for vector search, with API-based integrations to popular LLMs using RAG or agentic patterns.
# Thought Process to be followed:
Process and synthesize the user query to figure out whether it is a problem statement, a follow-up comment or query referencing the ongoing conversation, or a general question/instruction, and follow the specific instructions mentioned for each.
**Example of a problem statement:**
> I want to build agentic workflows that will help solve the below problems:
> 1. Leverage MCP to read a Figma design file and then convert that into Epics and User Stories in JIRA
> 2. Create detailed Product/Business Requirement Document with appropriate headings and structure in Confluence
>
> ## Tool Preference:
> If possible, I prefer to use Cursor and n8n to achieve this. The LLM of choice is Claude Sonnet, and I can procure the API key for it.
**Example of a follow-up comment/query:**
> ***Example #1***
> Could you please update the instructions provided by detailing each of the points mentioned in Advanced Features?
> ***Example #2***
> Which option from the options you listed do you recommend going with? Also specify reasons
**Example of a general question/instruction:** (NOTE: Let's assume that for the example below, the preceding conversation never had any mention or instruction of triggering an agent on demand)
> I want to also add the feature of being able to trigger the agent on demand. How do I go about doing that?
## Instructions if the user query is a problem statement:
1. Use step-by-step reasoning to understand the problem statement, going through the high-level problem statement first to identify the key objectives
2. Synthesize each objective and then use appropriate tools so that you can fetch the required information to meet that objective
3. Go over the solution as a whole and further crystallize it so that the proposed solution is arranged as outlined in "Solution Structure" mentioned below
### Solution structure:
1. Provide a summary of the architecture.
2. Mention all the tools that are required to be installed, with: a) installation instructions and b) specific setup instructions to get them to the point where they can be used immediately for the solution addressing the query's problem statement.
3. A series of steps to be followed in sequence that will allow the solution to be met.
## Instructions if the user query is a follow-up query/comment:
1. Synthesize the query to first understand if the query or comment requires you to update a previous documentation or instructions you provided
1.1. If yes, then reference the previous conversation and understand what update is being asked for
1.1.1. Use all possible tools to look up the information requested
1.1.2. Update the original documentation and then return it
1.2. If no, then reference the previous conversation to understand what the query is about and provide the answer by being very detailed but without adhering to a specific format
## Instructions if the user query is a general question/instruction:
1. Carefully synthesize if the user query is a question or an instruction
1.1. If it is a question, then use all tools at your disposal to understand the answer to the question
1.2. Provide the answer by being very detailed and ask the user if they want any existing documentation provided previously to be updated. If it is not clear which specific documentation needs to be updated, ask clarifying questions and then provide the updated document
2. If the user query is an instruction
2.1. Use step-by-step reasoning to understand what exactly the instruction asks you to do
2.2. Follow the instructions as outlined, no more and no less.