
Adventures with Snowflake MCP and Semantic Views

Snowflake MCP and Claude

Last month, I had an opportunity to roll up my sleeves and start building analytics with Snowflake MCP and Snowflake Semantic Views. I wanted to see how far I could push real-world analyst and quality assurance scenarios with Tableau MCP and DataTools Pro MCP integration. The results gave me a glimpse of the future of AI/BI with real, production data. My objective was to deliver a correct, viable analysis that otherwise would have been delivered via Tableau.

Time spent modeling my data, providing crystal-clear semantics, and working with unambiguous data pays off. The lab delivered great results, but I ended it with serious concerns about the governance, trust, and quality assurance layers. This article highlights my findings and links to step-by-step tutorials.


Connecting Claude, Snowflake MCP, and Semantic Views

The first step in connecting all of the components was building my Snowflake Semantic Views. Snowflake MCP gave me the framework to orchestrate queries and interactions, and Snowflake Semantic Views gave me the lens to apply meaning. All of my work and experimentation occurred in Claude, which gave me the AI horsepower to analyze and summarize insights. To connect Snowflake to Claude, I used the official Snowflake MCP Server, installed on my desktop and configured in Claude.
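For reference, the Claude Desktop configuration for the official Snowflake MCP Server looks roughly like the sketch below. The package name, flags, and paths reflect the snowflake-labs MCP project at the time of writing and may differ by version, so treat this as a starting point and check the project README.

```json
{
  "mcpServers": {
    "snowflake": {
      "command": "uvx",
      "args": [
        "snowflake-labs-mcp",
        "--service-config-file", "/path/to/tools_config.yaml",
        "--connection-name", "default"
      ]
    }
  }
}
```

The service config file is where you declare which semantic views and Cortex search services the server may expose, and the connection name points at credentials stored in your local Snowflake connections file.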

Together, these tools created a working environment where I could ask questions, validate results, and build confidence in the answers I got back.


Creating Snowflake Semantic Views

While setting up my Snowflake Semantic View, I spent some time researching and reading about other folks’ experiences with semantic views. I highly recommend having a validated and tested Semantic View before embarking on AI labs. If you don’t know what metadata to enter into your Semantic View, seek additional advice from subject matter experts. AI can fill in blanks, but it shouldn’t be trusted to invent meaning without human oversight: Why AI-Generated Meta-Data in Snowflake Semantic Views Can Be Dangerous

Bottom line… Begin with a simple and concise Snowflake semantic model. Build clearly defined dimensions and measures. Use real-world aliases, and refrain from using AI to fill in the blanks unless evaluating AI-generated metadata is your objective. Layer on complexity once you’re comfortable with the results.
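To make “simple and concise” concrete, here is a minimal sketch of what that DDL can look like. All table, column, and synonym names are hypothetical, and clause syntax has evolved across Snowflake releases, so verify against the current CREATE SEMANTIC VIEW documentation rather than copying this verbatim.

```sql
-- A deliberately small semantic view: two tables, one join,
-- a handful of clearly named dimensions, and one metric.
CREATE OR REPLACE SEMANTIC VIEW sales_semantic_view
  TABLES (
    orders AS analytics.sales.orders
      PRIMARY KEY (order_id)
      COMMENT = 'One row per customer order',
    customers AS analytics.sales.customers
      PRIMARY KEY (customer_id)
      COMMENT = 'One row per customer'
  )
  RELATIONSHIPS (
    orders_to_customers AS orders (customer_id) REFERENCES customers
  )
  FACTS (
    orders.order_amount AS amount
  )
  DIMENSIONS (
    customers.customer_name AS full_name
      WITH SYNONYMS = ('client', 'account name'),
    orders.order_date AS order_date
  )
  METRICS (
    orders.total_revenue AS SUM(orders.order_amount)
      COMMENT = 'Gross revenue across all orders'
  )
  COMMENT = 'Narrowly scoped view for revenue Q&A';
```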


What Worked Well

  • Control over data access
    Thankfully, the Snowflake MCP is limited to semantic views and Cortex search. The opportunity and value of Cortex search cannot be overstated; I will cover that in another post. Unleashing an AI agent with elevated permissions to write SQL against your entire data warehouse is a governance nightmare. Semantic Views gave me the ability to scope exactly what Claude could see and query, as the query sketch after this list illustrates.
  • Accuracy of results
    The top question I get during AI labs is: “Is this information correct?” I kept a validated Tableau dashboard on my other monitor to verify the correctness of every answer.
  • Simple to complex questioning
    My recommendation with any LLM-powered tool is to start with high-level aggregate questions. Use these to build a shared understanding and confidence. Then, grounded in validated facts, you can drill down into more detailed questions. This approach kept me in control when the analysis moved beyond my existing knowledge and available analysis.
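Under the hood, questions answered through a Semantic View resolve to Snowflake’s semantic-view query syntax rather than free-form SQL over raw tables. A minimal sketch, assuming the hypothetical sales_semantic_view from earlier; exact syntax and output column naming may vary by release:

```sql
-- Revenue by customer, expressed against the semantic layer.
-- The agent never sees the underlying tables or join logic.
SELECT *
FROM SEMANTIC_VIEW(
    sales_semantic_view
    DIMENSIONS customers.customer_name
    METRICS orders.total_revenue
)
ORDER BY total_revenue DESC
LIMIT 10;
```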

Where I Got Stuck

Three challenges slowed me down:

  1. Metadata gaps – When the semantic layer lacked clarity, Claude produced ambiguous answers. It isn’t a garbage-in, garbage-out problem; it is that I carry a level of subject matter expertise that was not captured in my semantic layer or in a feedback loop that could make the AI system smarter. LLM analysts feel less magical when you already know the answers. That is where adding Tableau MCP allowed a pseudo peer review to occur.
  2. Over-scoping – When I got greedy and exposed too many columns, ambiguity crept in. AI responses became less focused and harder to trust. Narrower scope = better accuracy.
  3. Context limits – I had Claude do a deep analysis dive. I also had it code a custom funnel dashboard that perfectly rendered a visual funnel with correct data. At some point, Claude explained that my context limit had been reached. My analysis hit a brick wall, and I had to start over. Claude is a general-purpose AI chatbot, so limits are expected, but it was still disappointing to hit a stride and have to stop working.

Risks You Should Know

If you’re using AI to build your semantic layer, you need to be aware of the risks:

  • AI-generated semantics can distort meaning. It’s tempting to let an LLM fill in definitions, but without context, you’re embedding bad assumptions directly into your semantic layer: Why AI-Generated Meta-Data in Snowflake Semantic Views Can Be Dangerous
  • Do not give LLMs PII or sensitive PII. As a rule of thumb, I do not add PII or sensitive PII to semantic models. I hope that at some point we can employ Snowflake aggregation rules or masking rules here (see the sketch after this list).
  • Governance blind spots. Connecting the Snowflake MCP requires access from your desktop. For governance, we use a personal access token for that specific Snowflake user’s account. That ensures all requests are auditable. Beyond a single user on a desktop, it’s unclear how to safely scale the MCP.
  • False confidence. Good syntax doesn’t equal good semantics. Always validate the answers against known results before you scale usage.
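On the PII point, Snowflake’s existing masking policies are the kind of control I would want enforced beneath the semantic layer. A minimal sketch with hypothetical names; how masking interacts with Semantic Views exposed over MCP is something to verify in your own environment:

```sql
-- Mask email addresses for every role except a privileged one.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING)
  RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
    ELSE '***MASKED***'
  END;

-- Attach the policy to the sensitive column.
ALTER TABLE analytics.sales.customers
  MODIFY COLUMN email SET MASKING POLICY email_mask;
```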

Final Take

Snowflake MCP and Semantic Views are still very much experimental features. They provide a glimpse of what will be possible once the barriers to accessing governed, semantically correct data are removed.

In my case, I employed DataTools Pro for deeper metric glossary semantics and a writeback step via Zapier to capture learnings, re-directions, and insights for auditing purposes. If you would like assistance setting up a lab for testing, feel free to contact us to set up a complimentary session.

Process hacking Gmail and Salesforce with Zapier and OpenAI

Gmail to Salesforce

My desire for a simple, streamlined Gmail and Salesforce flow goes back 12 years. One of my guilty pleasures as a tech and data geek is automating repetitive, mundane tasks. However, as a business operator with limited hours in a day, there are two conditions I evaluate to determine if a problem is worth automating:

Impact of addressing the problem: If I want to make an impact through automation, I need to see a 3x return on time per month. In other words, if I invest 10 hours tinkering and automating something, it should generate 30 hours of time savings per month per person. This is a high barrier that prevents me from solving the wrong problems.

Pain the problem causes: If I can’t get my desired return on time, I rate how much pain the problem causes. By “pain” I mean the mental and even emotional strain you experience at work: stress, frustration, cognitive overload, and context switching. It accumulates and manifests as friction between people and process.

When Gmail and Salesforce friction reached a level 5 pain…

On a busy day with 20-plus email threads, I found myself fumbling between Gmail, Excel, and Salesforce rather than communicating and advancing my business. I decided to vent on LinkedIn, wondering if anyone else in my immediate network had the same pain point.

Business pain scale

The root of the problem

When I get an email from someone who does not exist in my CRM, I have not found a simple mechanism to get that person from email into Salesforce as a contact or lead. After 30 minutes of research, I found that this problem may be solvable with an Einstein tool.

To Salesforce’s credit, once someone exists as a lead or contact in my CRM, the native extension from Salesforce is extremely useful.

My Zapier and OpenAI solution explained

How do the Gmail extension, Zapier, and ChatGPT work together?

  1. Process the email signature and determine name, email, phone, geography, company, title.
  2. Enrich the data by classifying the title to role or group (seniority)
  3. Check to see if the person exists in the CRM as a contact by comparing the email address
    • If the email exists as a contact, is it the same person (first name / last name) or “something close”? I put this in quotes because historically this was a human judgement call, but in 2025 it is an LLM judgement call for me.
  4. Check to see if the person exists in the CRM as a lead using the same process
  5. Insert the new record if it does not exist
  6. Append only the data points that are missing from the existing record (a code sketch of this whole flow follows below)
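The Zap itself is point-and-click, but here is a rough Python sketch of the same pipeline to make the logic concrete. The model name, prompt, and field mappings are hypothetical, and the OpenAI and simple-salesforce calls illustrate one way to wire this up, not the actual Zap:

```python
import json

from openai import OpenAI
from simple_salesforce import Salesforce

client = OpenAI()  # reads OPENAI_API_KEY from the environment
sf = Salesforce(username="...", password="...", security_token="...")

def extract_signature(email_body: str) -> dict:
    """Steps 1-2: parse the signature and classify seniority."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Extract name, email, phone, geography, company, and title "
                "from this email signature. Classify the title into a "
                "seniority group. Reply as JSON with those keys."
            )},
            {"role": "user", "content": email_body},
        ],
    )
    return json.loads(response.choices[0].message.content)

def find_person(email: str) -> dict | None:
    """Steps 3-4: look for an existing contact, then an existing lead."""
    for obj in ("Contact", "Lead"):
        result = sf.query(
            f"SELECT Id, FirstName, LastName FROM {obj} "
            f"WHERE Email = '{email}' LIMIT 1"
        )
        if result["totalSize"]:
            return {"object": obj, **result["records"][0]}
    return None

def upsert_person(email_body: str) -> None:
    """Steps 5-6: insert if new; otherwise only fill in missing fields."""
    person = extract_signature(email_body)
    existing = find_person(person["email"])
    if existing is None:
        sf.Lead.create({
            "FirstName": person["name"].split()[0],
            "LastName": person["name"].split()[-1],
            "Email": person["email"],
            "Company": person.get("company") or "Unknown",
            "Title": person.get("title"),
        })
    # else: compare fields and update only the blanks,
    # with the "same person or something close" judgement
    # delegated to the LLM as described in step 3.
```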

Creating my first Chrome Extension with ChatGPT

With no working knowledge of how to build a Chrome extension, I opened ChatGPT and prompted it with my requirements. An hour of tinkering and 25 iterations back and forth with ChatGPT later, the solution was complete.
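For anyone curious what ChatGPT produced structurally, a Chrome extension of this kind boils down to a manifest plus a content script that runs on Gmail and hands the open email off to the automation. A hypothetical, minimal manifest sketch (names and permissions are illustrative, not my actual extension):

```json
{
  "manifest_version": 3,
  "name": "Gmail to Salesforce Helper",
  "version": "0.1",
  "content_scripts": [
    {
      "matches": ["https://mail.google.com/*"],
      "js": ["content.js"]
    }
  ],
  "permissions": ["activeTab"]
}
```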

Building my Zap for GMail and Salesforce

I have been working with Zapier for at least 10 years now, and I am drawing on lots of experience automating my data flows into Salesforce. I put that experience to work creating a Zap that achieved most of my solution design.

Zapier connects Gmail to Salesforce

Letting the LLM do the heavy lifting

Zapier has a great interface for using ChatGPT to process data into a consistently structured output. If you have built wrapper apps with ChatGPT like I have, you know this was challenging in the early days. Now, I have ChatGPT process the email with specific instructions and produce the structured output.

The Results

The result of my tinkering is the complete removal of the pain of communicating in Gmail. I use my simple Chrome extension with Zapier and OpenAI every day to handle intelligent additions of contacts into Salesforce.

Day-to-day, this tool has been incredible. At this point I am saving 2-3 hours a month on data entry as I continue to scale my outreach efforts with DataTools Pro. I had not expanded on my MVP or shared it until recent demos raised interest.

Gmail and Salesforce

For now, if you want to test the solution, need help setting it up, or want to explore adapting it to other scenarios like creating service tickets or Outlook integration, feel free to contact me. Better yet, email me at ryan@datatoolspro.com so I can show you how your email flowed into my CRM!


Why we love automating knowledge retention with Zapier and DataTools Pro

At DataTools Pro, we’re always on the lookout for ways to streamline processes and retain knowledge to feed into our AI brain! We are obsessed with capturing cross-team knowledge, which is why we have chosen to innovate new ways to manage metrics with Zapier and DataTools Pro. Zapier has become an essential tool for automating our own internal workflows and ensuring that our team is always in sync with as little human intervention as possible.

Zapier: Automating Knowledge Flow Across Apps

Zapier is a powerful workflow automation platform that connects over 4,000 apps, allowing us to create seamless data flows without custom coding. For us, this means we can push critical metrics from various sources into DataTools Pro with just a few clicks. Whether it’s Salesforce, Google Sheets, or Tableau, Zapier helps ensure that all of our metrics definitions and changes are automatically centralized in one place: our Metrics Glossary in DataTools Pro.

This process not only saves time but also ensures that our knowledge retention efforts are smooth and consistent across all platforms.

How We Use Zapier Internally at DataTools Pro

Lead Intake and Activation Funnel

Internally, we’ve integrated Zapier to manage our intake, activation, and onboarding of DataTools Pro users across our website, app, and Salesforce. With Zapier, we run an ultra-simple Salesforce org where our business process flow for lead intake lives in Zapier, not Salesforce.

As a result of our approach:

  1. We don’t have duplicate lead problems
  2. All web forms and activities are captured and retained
  3. Our marketing automation emails are aligned and captured
  4. Our entire end-to-end activation journey across 4 disparate clouds is in sync, with clean data
  5. Our Salesforce management and development costs are extremely low
  6. Returning users, customers, and prospects are routed and logged as activities

Risk we acknowledge

Zapier is a single point of failure connecting prospects and clients to activation. However, Zapier has the sophisticated logging, debugging, alerting, and replay capabilities that you need to properly manage an onboarding funnel. At DataTools Pro, there is no “build and pray” hoping that our critical pipelines don’t fail.

Metrics Management

We have just started scratching the surface of our brand-new Zap for DataTools Pro, which allows our users to connect any app to DataTools Pro through Zapier. The first iteration of this integration allows Zapier to push metrics directly into our centralized Metrics Glossary. The flexibility of Zapier’s workflows will ultimately allow us to synchronize new metrics across knowledge management platforms. DataTools Pro will handle monitoring, change management, and integration across business and analytics teams. Zapier handles distributing that knowledge to the productivity tools you are already using!

A Simple, Powerful Approach to Knowledge Retention

By connecting our Metrics Glossary to Zapier, we’ve removed a significant pain point: the manual labor of gathering and syncing information across platforms. This automation gives us more time to focus on what matters: delivering value to our customers. With Zapier handling the movement of data, our team can stay razor-focused on driving education, utility, and value for our DataTools Pro users. The next horizon for us is fully automating our metrics, roadmap, prioritization, and knowledge distribution as we ship DataTools Pro features!