Case Study: Building a Research Assistant Agent with Interacly

By Interacly Team · 12 min read

Doing research can be time-consuming. You sift through search results, read articles, extract key points, summarize, and cite sources. What if an AI agent could handle the heavy lifting?

This case study walks through how you might design and build a custom Research Assistant Agent using the principles and tools available on a platform like Interacly. We’ll focus on the workflow design, not specific code implementations.

The Goal: Automated Research & Summary

Our objective is to create an agent workflow that takes a research topic as input and produces a concise, well-sourced summary as output.

  • Input: A research question or topic (e.g., “What are the latest advancements in solid-state battery technology?”)
  • Output: A 300-word summary including key findings and links to the top 3-5 sources.

Designing the Workflow: An Orchestrated Approach

A single agent might struggle with this multi-step task. It needs to search, filter, read, synthesize, and format. This screams orchestration – breaking the task down for specialized agents.

Here’s our planned workflow:

  1. Agent 1: Search Query Generation: Refines the user’s input topic into effective search engine queries.
  2. Agent 2: Web Researcher: Executes the search queries using a web search tool and retrieves relevant URLs.
  3. Agent 3: Content Scraper/Extractor: Visits the top URLs and extracts the main text content from each page.
  4. Agent 4: Relevance Filter: Reads the extracted content and filters out irrelevant or low-quality sources based on the original topic.
  5. Agent 5: Summarizer: Takes the relevant content chunks and synthesizes them into a coherent summary.
  6. Agent 6: Formatter: Formats the summary and lists the source URLs.

[Placeholder: flowchart diagram showing the six agents connected sequentially]
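
Before configuring anything in a visual canvas, it helps to see the data flow as plain code. The sketch below is not Interacly's API; every function name is a hypothetical stand-in for one of the six agents above, and each one is fleshed out conceptually in the sections that follow.

```python
# A minimal plain-Python sketch of the data flow between the six agents.
# Each function stands in for one configured agent; none of these names
# come from Interacly's actual API.

def research_workflow(topic: str) -> str:
    queries = generate_queries(topic)           # Agent 1: Search Query Generation
    urls = search_web(queries)                  # Agent 2: Web Researcher
    scraped = scrape_pages(urls)                # Agent 3: Content Scraper/Extractor
    relevant = filter_relevant(scraped, topic)  # Agent 4: Relevance Filter
    summary = summarize(relevant, topic)        # Agent 5: Summarizer
    return format_output(summary, relevant)     # Agent 6: Formatter
```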

Building the Agents (Conceptual Steps)

Using a visual platform like Interacly, here’s how you’d configure each agent:

Agent 1: Search Query Generator

  • Goal/Prompt: “Given a research topic: {{input.topic}}, generate 3 distinct, effective search query strings suitable for Google Search.”
  • Tools: None needed (just LLM reasoning).
  • Output: A list of 3 search strings.
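
As a rough illustration (not Interacly's actual SDK), Agent 1 boils down to a single templated LLM call. Here `llm()` is a hypothetical helper that wraps whatever model the platform invokes:

```python
def generate_queries(topic: str) -> list[str]:
    """Agent 1: turn a research topic into three search query strings."""
    prompt = (
        f"Given a research topic: {topic}, generate 3 distinct, effective "
        "search query strings suitable for Google Search. "
        "Return one query per line."
    )
    response = llm(prompt)  # hypothetical LLM call; swap in your model client
    return [line.strip() for line in response.splitlines() if line.strip()]
```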

Agent 2: Web Researcher

  • Goal/Prompt: “For each search query in the list {{input.queries}}, use the ‘Web Search Tool’ to find the top 3 relevant URLs. Compile a unique list of all URLs found.”
  • Tools: Web Search Tool.
  • Input: Output from Agent 1.
  • Output: A list of unique URLs (e.g., 5-9 URLs).
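
In plain Python, Agent 2 might look like the sketch below. `web_search()` is a hypothetical stand-in for the platform's Web Search Tool, assumed to return result dicts with a `url` field:

```python
def search_web(queries: list[str], per_query: int = 3) -> list[str]:
    """Agent 2: run each query through a search tool and dedupe the URLs."""
    seen: set[str] = set()
    urls: list[str] = []
    for query in queries:
        for result in web_search(query, limit=per_query):  # hypothetical search tool
            url = result["url"]
            if url not in seen:
                seen.add(url)
                urls.append(url)
    return urls
```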

Agent 3: Content Scraper/Extractor

  • Goal/Prompt: “For each URL in the list {{input.urls}}, use the ‘Web Scraper Tool’ to fetch the main textual content of the page. Handle potential errors gracefully (e.g., if a page fails to load).”
  • Tools: Web Scraper Tool.
  • Input: Output from Agent 2.
  • Output: A list of objects, each containing a URL and its extracted text (or an error message).
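
Outside the platform, the same step could be sketched with `requests` and `BeautifulSoup`; the error handling mirrors the "handle errors gracefully" instruction in the prompt:

```python
import requests
from bs4 import BeautifulSoup

def scrape_pages(urls: list[str]) -> list[dict]:
    """Agent 3: fetch each page and extract its main text, recording failures."""
    results = []
    for url in urls:
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            soup = BeautifulSoup(resp.text, "html.parser")
            # Crude main-text extraction; a production scraper would do better.
            text = soup.get_text(separator=" ", strip=True)
            results.append({"url": url, "text": text, "error": None})
        except requests.RequestException as exc:
            # Fail gracefully: keep the URL and the error, drop the text.
            results.append({"url": url, "text": None, "error": str(exc)})
    return results
```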

Agent 4: Relevance Filter

  • Goal/Prompt: “Review the extracted text for each source in {{input.scraped_data}}. Based on the original topic {{workflow.initial_topic}}, determine if the source is highly relevant. Keep only the text from the top 3-5 most relevant sources.”
  • Tools: None needed (LLM reasoning for relevance judgment).
  • Input: Output from Agent 3 (and access to the initial topic).
  • Output: A list containing only the text from the top relevant sources.
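
One way to approximate this filter in code is to have the LLM score each source and keep only the highest-scoring ones. Again, `llm()` is a hypothetical helper, and the 0-10 scoring scheme is just one possible design:

```python
def filter_relevant(scraped: list[dict], topic: str, keep: int = 5) -> list[dict]:
    """Agent 4: score each source for relevance and keep the top few."""
    scored = []
    for item in scraped:
        if not item.get("text"):
            continue  # skip pages that failed to scrape
        prompt = (
            f"Topic: {topic}\n\nSource text:\n{item['text'][:4000]}\n\n"
            "On a scale of 0-10, how relevant is this source to the topic? "
            "Reply with a single number."
        )
        try:
            score = float(llm(prompt).strip())  # hypothetical LLM call
        except ValueError:
            score = 0.0  # treat unparseable replies as irrelevant
        scored.append((score, item))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:keep]]
```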

Agent 5: Summarizer

  • Goal/Prompt: “Synthesize the provided relevant text chunks {{input.relevant_texts}} into a single, coherent 300-word summary focusing on the key findings related to the original topic {{workflow.initial_topic}}. Do not include information not present in the provided texts.”
  • Tools: None needed (LLM reasoning for summarization).
  • Input: Output from Agent 4 (and access to the initial topic).
  • Output: The 300-word summary text.
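
Conceptually, the summarizer is another templated LLM call, with the grounding constraint ("do not include information not present in the provided texts") baked into the prompt. A rough sketch:

```python
def summarize(relevant: list[dict], topic: str) -> str:
    """Agent 5: synthesize the kept sources into a ~300-word summary."""
    corpus = "\n\n".join(item["text"][:4000] for item in relevant)
    prompt = (
        f"Synthesize the following text into a single, coherent 300-word summary "
        f"of the key findings on: {topic}. Do not include information that is "
        f"not present in the provided texts.\n\n{corpus}"
    )
    return llm(prompt)  # hypothetical LLM call
```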

Agent 6: Formatter

  • Goal/Prompt: “Format the final output. Present the summary {{input.summary}} first. Then, list the URLs of the relevant sources used {{Agent4.output_source_urls}} under a ‘Sources:’ heading.”
  • Tools: None needed (LLM for basic formatting).
  • Input: Output from Agent 5 (summary) and the list of relevant URLs kept by Agent 4.
  • Output: The final formatted text.
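
Because this step is mostly mechanical, a sketch of it needs no LLM at all; simple string assembly produces the same layout the prompt describes:

```python
def format_output(summary: str, relevant: list[dict]) -> str:
    """Agent 6: present the summary followed by a Sources list."""
    sources = "\n".join(f"- {item['url']}" for item in relevant)
    return f"{summary}\n\nSources:\n{sources}"
```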

“Breaking down complex cognitive tasks like research into specialized agent roles mirrors how effective human teams operate. Orchestration platforms make this feasible for AI.” - Placeholder Quote: AI Workflow Expert

Key Interacly Features Used

  • Visual Orchestration Canvas: To design and connect the agent sequence.
  • Tool Integration: Connecting agents to Web Search and Web Scraping tools.
  • Prompt Templating: Using variables like {{input.topic}} to pass data between agents.
  • Agent Specialization: Configuring each agent with a specific, focused prompt and limited tools.
  • (Implicit) Workflow State: Accessing data from earlier steps (like the initial topic or URLs from Agent 4) in later steps.
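
To make the Prompt Templating item concrete: a minimal substitution function for `{{dotted.path}}` variables might look like the sketch below. Interacly's real templating engine is not documented here, so treat this purely as an illustration of the idea:

```python
import re

def render_template(template: str, context: dict) -> str:
    """Fill {{dotted.path}} placeholders from a nested context dict."""
    def lookup(match: re.Match) -> str:
        value = context
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

# Example:
# render_template("Given a research topic: {{input.topic}}, generate 3 queries.",
#                 {"input": {"topic": "solid-state batteries"}})
```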

Potential Improvements & Variations

  • RAG for Internal Docs: Replace the Web Researcher/Scraper with a Vector Search tool to query internal company documents stored in a vector database.
  • Error Handling: Add conditional logic. If the Scraper fails on too many URLs, route to an agent that notifies the user.
  • Confidence Scoring: Have the Relevance Filter agent assign a confidence score to each source.
  • Recursive Search: Allow the Researcher agent to generate follow-up queries based on initial results.
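
The Error Handling idea above can be expressed as a simple conditional branch around the scraper. `notify_user()` is a hypothetical notification step, and the 50% failure threshold is an arbitrary choice:

```python
def run_with_error_branch(urls: list[str], topic: str, max_failure_rate: float = 0.5):
    """Conditional branch: bail out and notify the user if scraping mostly fails."""
    scraped = scrape_pages(urls)
    failures = sum(1 for item in scraped if item["error"] is not None)
    if urls and failures / len(urls) > max_failure_rate:
        return notify_user(  # hypothetical notification agent/step
            f"Could not retrieve enough sources for '{topic}'; "
            f"{failures} of {len(urls)} pages failed to load."
        )
    return filter_relevant(scraped, topic)
```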

Conclusion

This case study demonstrates how a seemingly complex task like automated research can be broken down into manageable steps handled by specialized AI agents. Using an orchestration platform like Interacly allows you to design, build, and manage such workflows visually, combining the power of LLM reasoning with specific tools to achieve practical automation.

You can start simpler – maybe just a two-step workflow to search and summarize – and gradually add more specialized agents as needed. The power lies in the composability.
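
For instance, the entire two-step starter workflow fits in a few lines, reusing the hypothetical `web_search()` and `llm()` helpers from earlier and assuming each search result carries a short `snippet`:

```python
def simple_research(topic: str) -> str:
    """A two-step starter workflow: search, then summarize the snippets."""
    results = web_search(topic, limit=5)  # hypothetical search tool
    snippets = "\n".join(r["snippet"] for r in results)  # assumes a 'snippet' field
    return llm(
        f"Summarize the key findings on '{topic}' from these search snippets:\n"
        f"{snippets}"
    )
```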


FAQ

Q1: Why use multiple agents instead of one big prompt for research?

A1: Breaking the task down improves reliability and performance. Specialized agents (search, extract, summarize) are better at their specific jobs. It also makes debugging easier if one step fails.

Q2: What does ‘orchestration’ mean in this context?

A2: Orchestration refers to defining the sequence and data flow between multiple specialized agents to accomplish a larger workflow. Agent A’s output becomes Agent B’s input, and so on.

Q3: What kind of ‘tools’ can these agents use?

A3: Tools can include web search APIs, web scrapers, calculators, database connectors, code interpreters, connections to vector stores for RAG, or even calls to other AI models or agent workflows.

Q4: How does Interacly make building this easier?

A4: Interacly provides a visual drag-and-drop canvas to design the workflow, connect agents, configure their prompts and tools, and manage the data flow, abstracting away much of the underlying orchestration code.

Q5: Could this workflow be adapted to research internal company documents?

A5: Yes. You would replace the Web Search/Scraper agents with an agent using a Vector Search tool connected to a database containing your indexed company documents (this is a common RAG pattern).
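
As a rough sketch of that RAG variant (assuming a hypothetical `vector_store` client and the `llm()` helper from earlier, not any specific vector database API):

```python
def internal_docs_research(topic: str, k: int = 5) -> str:
    """RAG variant: query an internal vector store instead of the web."""
    # `vector_store` is a hypothetical client over your indexed company documents.
    chunks = vector_store.query(topic, top_k=k)
    context = "\n\n".join(chunk["text"] for chunk in chunks)
    return llm(
        f"Using only the context below, summarize what our internal documents "
        f"say about: {topic}\n\nContext:\n{context}"
    )
```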