Introducing Bulk Analysis: The detail-oriented way to interpret a collection of documents

What is Bulk Analysis?

Bulk Analysis is the fastest, most thorough way to get detail-oriented insights out of a collection of documents.  Conducting a Bulk Analysis means choosing a collection of documents, defining a set of questions, and then getting an answer to every question for every document with the click of a button.

This unlocks a whole new category of workflows in Credal: you can conduct analyses that otherwise would have taken weeks of manual effort, or would have been outright impossible.  It empowers non-technical users to get more out of generative AI than can be accomplished with a single LLM call.

Let’s look at an example use case to see how this can be valuable in a real organization.

Example: Product Insights from Sales Call Transcripts

Let’s say you’re a product manager at a Series C startup.  A year ago, you got quite a lot of signal from reading sales call transcripts.  Sure, the sales reps have always passed along the acute feature requests they hear, but there was a lot more interesting information hidden in the transcripts.  You got a lot of value from reading how customers described their pain points in their own, somewhat squishy, words, and you used that information to guide the product vision for the next few quarters and beyond.  But in the last 12 months, the company has grown from 10 salespeople to 100, and it’s no longer feasible to read all the transcripts yourself.  How can you use Credal to do this with generative AI?

The first thing you’ll do is create a Document Collection containing all of the call transcripts.  That’s easy using Credal’s out-of-the-box data connectors - whether the transcripts live in Salesforce, Google Docs, Snowflake, or any other system, you can get them into Credal easily, while ensuring that permissions are respected.  You end up with something like this:

Next, you’ll want to create a Copilot that knows how to analyze a single call transcript.  Credal makes this pretty straightforward (see our past blog post), but you will want to spend some time crafting good prompts.  If you just ask the LLM “Were there any interesting product insights from this call?”, it’ll probably give decent but not especially insightful answers.  Instead, drill down a couple of levels and ask the questions you’re internally asking yourself when you read a transcript.  Maybe you’ll come up with something like this:

Once you’re happy with that, it’s really easy to scale it up.  Configuring your Bulk Analysis is simply a matter of choosing the Document Collection and the Copilot you just created.  Now try running the Bulk Analysis, and you’ll get an answer to each one of the questions you wrote for each one of the call transcripts.  Check it out:

This is just a preview of the first five transcripts, but you can run this for 10,000+ documents at a time.

What’s the result?

In addition to seeing the results in the UI, you have two options for interpreting them.

First, you can download the results as a CSV file.  This allows you to pull the results into Google Sheets or Excel.  From there you can do whatever you want, including creating charts or aggregations.

Second, you can chat with the results.  Credal lets you interact with the full table of results in a conversational way.  You can ask “What were some common themes that came up around security and governance?” or “Exactly how many customers mentioned the Salesforce integration as being useful for them?”

Why not use RAG?

RAG is certainly tempting, but that approach won’t actually work for many types of questions.

Let’s look at this example: “Exactly how many customers mentioned the Salesforce integration as being useful for them?”

The typical approach for RAG is as follows:

  1. Extract a search term from the question, in this case maybe it’s “Salesforce integration useful”
  2. Search over the collection of transcripts, and figure out which transcripts (or excerpts of transcripts) are most relevant to the search term
  3. Fit as many transcripts or excerpts as you can into the context window of the LLM.  Maybe that comes out to about 10 transcripts
  4. Ask the LLM “Exactly how many customers mentioned the Salesforce integration as being useful for them?” and pass in the 10 most relevant transcripts

This works great if only 5 customers mentioned the Salesforce integration, but breaks down if more than 10 customers mentioned it, because we can’t show the LLM all of those transcripts at once!
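The four steps above can be sketched in Python.  This is a toy illustration, not Credal’s implementation: the keyword-overlap scoring stands in for a real embedding search, and the helper names and the fixed 10-transcript context budget are assumptions for the sake of the example.

```python
# Toy sketch of the four RAG steps.  Keyword overlap stands in for a
# real embedding search; function names and the 10-transcript context
# budget are illustrative assumptions, not Credal's API.

def relevance(transcript: str, search_term: str) -> int:
    """Step 2: score a transcript by how many search-term words it contains."""
    words = set(search_term.lower().split())
    return sum(1 for w in words if w in transcript.lower())

def retrieve(transcripts: list[str], search_term: str, context_budget: int = 10) -> list[str]:
    """Steps 2-3: rank transcripts by relevance, then keep only as many
    as fit the LLM's context window (here, a fixed budget of 10)."""
    ranked = sorted(transcripts, key=lambda t: relevance(t, search_term), reverse=True)
    return ranked[:context_budget]

# Step 1 happens upstream: the question is reduced to a search term.
search_term = "Salesforce integration useful"

# Toy corpus: 15 transcripts mention the integration, 5 do not.
transcripts = [f"call {i}: the Salesforce integration was useful" for i in range(15)]
transcripts += [f"call {i}: we discussed pricing" for i in range(15, 20)]

# Step 4 would pass `context` to the LLM -- but only 10 of the 15
# relevant transcripts made it in, so an exact count is impossible.
context = retrieve(transcripts, search_term)
print(len(context))  # the context window caps us at 10 transcripts
```

Running this makes the failure mode concrete: 15 transcripts are relevant, but the retrieval step can only surface 10 of them, so no prompt built this way can produce an exact count.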

On the other hand, Bulk Analysis helps you boil down each transcript into the key details you care about, and then you can pass the entire result of that to an LLM to do analysis across the full set of transcripts.
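That contrast is essentially a map-then-reduce: answer the question for every document first, then aggregate over the compact table of answers.  A minimal sketch, where the trivial `answer_question` keyword check stands in for a real per-transcript LLM call (nothing here is Credal’s actual API):

```python
# Hypothetical map-then-reduce sketch of Bulk Analysis.  In production,
# answer_question would be one LLM call per (transcript, question) pair;
# here it is a trivial keyword check so the example is runnable.

def answer_question(transcript: str, question: str) -> str:
    """Map step stand-in for a per-transcript LLM call."""
    return "yes" if "salesforce" in transcript.lower() else "no"

def bulk_analysis(transcripts: list[str], questions: list[str]) -> list[dict[str, str]]:
    """Answer every question for every document -- one row per transcript."""
    return [{q: answer_question(t, q) for q in questions} for t in transcripts]

questions = ["Did the customer mention the Salesforce integration as useful?"]
transcripts = [f"call {i}: the Salesforce integration was useful" for i in range(15)]
transcripts += [f"call {i}: we discussed pricing" for i in range(15, 20)]

results = bulk_analysis(transcripts, questions)

# Reduce step: the full table of answers is small enough to aggregate
# directly (or to hand to an LLM in a single prompt).
count = sum(1 for row in results if row[questions[0]] == "yes")
print(count)  # an exact count across ALL 20 transcripts, not just the top 10
```

Because every transcript is boiled down before any aggregation happens, the reduce step sees the whole corpus, and an exact count becomes possible where the RAG approach hit its context-window ceiling.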

RAG obviously continues to be useful for many use cases, but there are other cases where you need an answer to every question for every document.

Future Work

We have a handful of customers getting a ton of value from this already, and we’re eager to keep building it out.  There are a few things we’re excited about building based on the initial feedback we’ve heard:

  1. The ability to use a spreadsheet as the input of a Bulk Analysis rather than a Document Collection (e.g. answer this question for every row in my spreadsheet and add the answers as a new column)
  2. Helping organizations turn the results of a Bulk Analysis into a data asset that has compounding value for future projects
  3. Doing other interesting things with the results of a Bulk Analysis, like using them to back a live Tableau dashboard

Get in touch

If Bulk Analysis sounds like it would be useful for your organization, give us a shout at [email protected] !

If you’re an existing customer and want to get started using this today, hit us up over Slack or email us at [email protected] and we’d be happy to get you set up.
