
Systematizing Creativity: Building an Automated Content Repurposing Pipeline
A comprehensive guide to engineering a content repurposing workflow. Learn how to architect a Notion-to-JSON pipeline that automates drafting for Twitter, LinkedIn, and newsletters using intelligent agents.
The Problem: Context Switching Kills Flow
The hardest part of content creation isn't the idea; it's the logistics. You have a thought while debugging code or walking the dog. You jot it down. Later, you have to open that note, rewrite it for LinkedIn (professional, story-driven), rewrite it again for X/Twitter (punchy, thread-format), and maybe summarize it for a newsletter.
This is low-leverage work. It requires high executive function but offers zero creative return. As an automation engineer, if I have to do the same process three times manually, I consider the system broken.
This post details the architecture of a Content Repurposing Pipeline. We aren't just asking ChatGPT to "write a post." We are building a system that takes a raw kernel of an idea and processes it through a logic layer to produce a structured JSON payload ready for any scheduler API.
The Architecture: Input, Process, Output
We are treating content like code. It has a source (raw input), a build process (transformation), and a deployment target (social platforms).
Here is the stack:
- Database: Notion (The Headless CMS)
- Orchestrator: Make.com or n8n
- Intelligence: OpenAI GPT-4o or Claude 3.5 Sonnet (via API)
- Output: Structured JSON / Notion Log update
Phase 1: The Input (Notion Schema)
Your database needs to be more than a list of titles. It needs to hold the DNA of your content. I use a Notion database with specific properties that act as context injection for the AI agent.
Required Properties:
- Input Type (Select): Quick Thought, Code Snippet, Rant, Case Study
- Core Pillar (Select): Automation, AI Engineering, SaaS Building
- Raw Content (Text): The messy, unstructured brain dump
- Status (Status): Idea -> Generate Drafts -> Review -> Scheduled
The trigger for our automation is a status change. When I move a card from "Idea" to "Generate Drafts," a webhook fires.
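If you would rather drive this from your own script than a Make.com/n8n trigger, the same lookup is a single query against the database. Here is a minimal sketch using the official @notionhq/client SDK, assuming the property names above and two hypothetical environment variables (NOTION_TOKEN, NOTION_DB_ID):

import { Client } from "@notionhq/client";

const notion = new Client({ auth: process.env.NOTION_TOKEN });

// Pull every card that was just moved to "Generate Drafts"
async function findPendingIdeas() {
  const response = await notion.databases.query({
    database_id: process.env.NOTION_DB_ID ?? "",
    filter: {
      property: "Status",
      status: { equals: "Generate Drafts" },
    },
  });
  return response.results;
}

The webhook route is simpler for most setups; the script route just removes the third-party orchestrator.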
Phase 2: The Logic Layer (Prompt Engineering)
This is where most automations fail. If you send generic prompts, you get generic content. To build a tool that sounds like you, you need a multi-step prompt chain.
I don't ask the AI to write. I ask it to reformat based on constraints.
The System Prompt Strategy
We need to create a JSON object that contains drafts for multiple platforms. Here is the logic flow I feed the LLM:
- Analyze the Input: Identify the core value proposition and tone of the raw text.
- Apply Platform Constraints:
- Twitter/X: No hashtags, short sentences, "hook" first line, under 280 chars per tweet if threaded.
- LinkedIn: Professional hook, spacing for readability, clear CTA, "bro-etry" kept to a minimum.
- Output Format: Strictly JSON.
The Template
Here is the actual system prompt structure I use in the automation node:
You are a senior technical editor for an AI engineer.
Your goal is to take a raw technical thought and repurpose it for different distribution channels.
INPUT CONTEXT:
- Topic: {{Core_Pillar}}
- Type: {{Input_Type}}
- Raw Text: {{Raw_Content}}
INSTRUCTIONS:
1. Create a LinkedIn draft: Professional, authoritative, focuses on the "how-to" or the business logic.
2. Create a Twitter thread (array of strings): Punchy, hook-driven, removes fluff.
3. Create a Short Description: For metadata purposes.
OUTPUT FORMAT:
Return ONLY valid JSON with no markdown formatting.
{
  "linkedin_draft": "string",
  "short_description": "string",
  "twitter_thread": ["tweet 1", "tweet 2", "tweet 3"],
  "hook_analysis": "Why this hook works"
}
Engineer's Note: By forcing a JSON output from the LLM, we make the data programmatic. We can parse the `twitter_thread` array directly into separate database rows or scheduler slots later.
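If you wire this up outside of a Make.com/n8n module, here is a minimal sketch of the LLM call using the OpenAI Node SDK. The JSON mode flag (response_format: { type: "json_object" }) is what enforces the "return only valid JSON" instruction; SYSTEM_PROMPT is assumed to hold the template above, and the function name is illustrative.

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const SYSTEM_PROMPT = `...`; // paste the system prompt template above here

async function generateDrafts(corePillar: string, inputType: string, rawContent: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    // JSON mode: constrains the model to emit a parseable JSON object
    response_format: { type: "json_object" },
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      {
        role: "user",
        content: `Topic: ${corePillar}\nType: ${inputType}\nRaw Text: ${rawContent}`,
      },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}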
Phase 3: The Data Payload
Once the automation receives the response from the LLM, we have structured data. We don't want to auto-post this—AI hallucinates, and it lacks soul. We want to auto-draft.
The automation parses the JSON and updates the Notion page. I create a dedicated "Output Log" section inside the Notion page content block using the API, or map the text to specific properties.
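As a sketch of that write-back step (again assuming the Notion SDK, and a Status option literally named "Review"), the parsed drafts can be appended to the page body and the card nudged forward in one pass:

import { Client } from "@notionhq/client";

const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function writeDraftsToPage(
  pageId: string,
  drafts: { linkedin_draft: string; twitter_thread: string[] }
) {
  // Append an "Output Log" section to the page content
  await notion.blocks.children.append({
    block_id: pageId,
    children: [
      { heading_2: { rich_text: [{ text: { content: "Output Log" } }] } },
      { paragraph: { rich_text: [{ text: { content: drafts.linkedin_draft } }] } },
      ...drafts.twitter_thread.map((tweet) => ({
        bulleted_list_item: { rich_text: [{ text: { content: tweet } }] },
      })),
    ],
  });

  // Move the card from "Generate Drafts" to "Review"
  await notion.pages.update({
    page_id: pageId,
    properties: { Status: { status: { name: "Review" } } },
  });
}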
The Scheduler-Ready JSON
If you are building a custom micro-SaaS or using a tool like Typefully/Taplio API, your automation should finalize the data into a payload that looks like this:
{
  "source_id": "notion_page_123",
  "timestamp": "2023-10-27T10:00:00Z",
  "platforms": {
    "linkedin": {
      "content": "Building agents isn't about code, it's about architecture...",
      "media_prompt": "isometric server diagram"
    },
    "twitter": {
      "thread": [
        "Building agents isn't about code. \n\nIt's about architecture. 🧵",
        "1. Define the scope..."
      ]
    }
  },
  "tags": ["AI", "Engineering"]
}
This JSON object is the "Golden Artifact." It is platform-agnostic. From here, you can send it to Buffer, a custom Next.js dashboard, or back into a Notion "Review" database.
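If you consume the artifact in TypeScript, a small interface keeps every downstream consumer honest about its shape. The dispatch function and endpoint URL below are placeholders, not a real scheduler API:

// Shape of the "Golden Artifact" (field names mirror the JSON above)
interface RepurposedContent {
  source_id: string;
  timestamp: string; // ISO 8601
  platforms: {
    linkedin?: { content: string; media_prompt?: string };
    twitter?: { thread: string[] };
  };
  tags: string[];
}

// Example consumer: fan the artifact out to whichever scheduler you use
async function dispatch(payload: RepurposedContent): Promise<void> {
  if (payload.platforms.twitter) {
    await fetch("https://your-scheduler.example.com/api/threads", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload.platforms.twitter),
    });
  }
}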
Why This Matters
This workflow separates the Creative from the Operative.
When I am in "Builder Mode," I dump raw data into the system. I don't worry about character limits or hashtags. I trust the pipeline.
When I switch to "Editor Mode," I open Notion, and 90% of the work is done. I tweak the tone, fix the technical nuance that the AI missed, and hit schedule.
Next Steps for Developers
If you want to implement this:
- Start Simple: Build the Notion-to-Notion pipeline first. Have the AI write the draft and paste it back into the page comments.
- Iterate Prompts: Your first few outputs will sound robotic. Tweak the system prompt by giving it examples of your previous best-performing posts (Few-Shot Prompting).
- Add Image Gen: Add a step that asks the LLM to write a DALL-E 3 prompt based on the content, generates the image, and attaches it to the Notion page (a rough sketch follows below).
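That image step might look like this with the OpenAI SDK; the models, the isometric style, and the helper names are assumptions, not a fixed recipe:

import OpenAI from "openai";

const openai = new OpenAI();

// Step 1: ask the LLM for an image prompt grounded in the finished draft
async function generateImagePrompt(linkedinDraft: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "Write one DALL-E 3 prompt for a minimal isometric illustration that matches this post. Return only the prompt.",
      },
      { role: "user", content: linkedinDraft },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Step 2: render it and return the hosted image URL (attach this to the Notion page)
async function generateImage(prompt: string): Promise<string> {
  const image = await openai.images.generate({
    model: "dall-e-3",
    prompt,
    size: "1024x1024",
  });
  return image.data?.[0]?.url ?? "";
}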
Automation isn't about replacing the creator. It's about building an exoskeleton for your creativity.