
 About Ryan Goodman

Ryan Goodman has been in the business of data and analytics for 20 years as a practitioner, executive, and technology entrepreneur. Ryan recently created DataTools Pro after 4 years working in small business lending as VP of Analytics and BI. There he implemented an analytics strategy and competency center for the modern data stack, data science, and governance. From his recent experiences as a customer, and now running DataTools Pro full time, Ryan writes regularly for Salesforce Ben and Pact on the topics of Salesforce, Snowflake, analytics, and AI.

My Token Spend Is Up 500%: What I Learned to Manage Claude AI Cost

Pacman AI Token Chomper

I have watched my Claude AI cost climb month over month for several months in a row. Sharing token burn is like sharing how many lines of code you write; it is a meaningless statistic. My AI token burn is where the work has taken me. Along the way, I have figured out how to buy back my time. There are other areas where I am chomping through tokens like Pac-Man with no real value.

Claude AI Cost

Accelerated Spend with Parallel and Self-Spawning Agents

Time and money are the two bottlenecks I run into now. The work is no longer gated by what I can think of. It is gated by how fast and how cheaply I can get an agent to do it. Once you start spawning agents that spawn other agents, you stop thinking in monthly cost and start thinking in spend to yield. The question I am grappling with is not how many tokens I burn… It's what the spend gives me back in cost avoidance and return on investment.

Self-spawning agents are exactly what they sound like. You give one agent an objective, and it spins up its own multi-turn sub-processes to handle each job, the same way a team tackles a problem. A research task that used to be one chat session becomes a tree of conversations, each one consuming context, calling tools, and writing output the parent agent then has to read. It feels nicer to watch and the output can be excellent, but if your instructions are not specific enough, you end up paying for a lot of wasted turns and dead-end transactions.

Shift from 1:1 to 1:n Agents

A back and forth chat session with Claude or ChatGPT will not burn many tokens. The average user does not come close to their daily limit. That changes the moment your workflow calls for parallel work. On any given afternoon I have 2-5 screens flickering with LLMs handling a variety of workloads. Each with their own context window, their own tool calls, their own MCP overhead. The math is no longer one user times one model. It is one user times n agents times however many turns each one takes. That is where the bill grows fast.
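To make that math concrete, here is a minimal sketch of the 1 user × n agents × turns scaling. The token counts and the per-million-token price are illustrative assumptions, not real Claude pricing:

```python
# Illustrative only: token counts and the $/M-token price are assumptions.
def session_cost(turns, tokens_per_turn, price_per_mtok):
    return turns * tokens_per_turn * price_per_mtok / 1_000_000

# One back-and-forth chat session.
solo = session_cost(turns=20, tokens_per_turn=2_000, price_per_mtok=3.0)

# Five parallel agents, each with its own context window, tool calls,
# and MCP overhead, each taking more turns per task.
parallel = 5 * session_cost(turns=40, tokens_per_turn=6_000, price_per_mtok=3.0)

print(f"solo: ${solo:.2f}, five agents: ${parallel:.2f}")
```

Under these assumptions the parallel setup costs roughly 30x the single chat, which is why the bill grows fast even though no single session looks expensive.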

Breadth of Adoption and Competency

The list of tasks I hand off to AI is not the same list it was six months ago. Research, document drafting, data work, dev operations, orchestration. The more confident I get, the more domains I throw at it, and the longer I let agentic workflows run. I hand off work before I go to sleep, and those runs last 1-3 hours.

Tips to Control Claude AI Cost and Risk

Not Every Task Needs the Best Model

Do not use Opus 4.7 for basic tasks. You are lighting money on fire. Opus is the most capable and the most expensive. Save it for work where reasoning quality actually matters. Architecture decisions, hard debugging, sensitive writing. Sonnet handles the bulk of normal work just fine. Haiku is plenty for cleanup, formatting, search, simple extractions, and high-volume small tasks. Match the model to the difficulty of the job. If the difference in output quality is not visible to a human, you are paying a premium for nothing.
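One cheap way to enforce this is a small routing table that maps task types to model tiers. The sketch below uses generic tier names and an assumed task taxonomy; the mapping is mine, not an official recommendation:

```python
# Illustrative model router: match the model tier to task difficulty.
# Task categories and tier assignments are assumptions for the sketch.
ROUTES = {
    "architecture": "opus",    # reasoning quality actually matters
    "hard_debugging": "opus",
    "drafting": "sonnet",      # bulk of normal work
    "refactor": "sonnet",
    "formatting": "haiku",     # cleanup and high-volume small tasks
    "extraction": "haiku",
}

def pick_model(task_type: str) -> str:
    # Default to the mid-tier model when a task is unclassified.
    return ROUTES.get(task_type, "sonnet")
```

The point is not the exact table; it is that the routing decision is made once, deliberately, instead of defaulting every request to the most expensive model.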

Narrow the Scope of Connections and MCPs to What You Need

This recommendation may be obsolete in a few months as AI tools get more efficient, but right now it matters. I noticed I was hitting my daily limits in minutes when I had dozens of MCPs wired up. Every MCP loads its tool definitions into the context of every turn. That overhead is paid before the agent does any real work. Turn off what you are not using. Build task-specific configurations. An agent does not need access to your CRM, your calendar, your codebase, and your design tool all at once if the job is to summarize a Slack thread.
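A rough back-of-envelope shows why this overhead compounds. The per-tool token count, tool counts, and turn counts below are assumptions, not measured values:

```python
# Assumed sizes: ~150 tokens per tool definition, 10 tools per MCP server.
def mcp_overhead_tokens(servers, tools_per_server, tokens_per_tool, turns):
    # Tool definitions ride along in the context of every turn.
    return servers * tools_per_server * tokens_per_tool * turns

bloated = mcp_overhead_tokens(servers=12, tools_per_server=10, tokens_per_tool=150, turns=50)
lean = mcp_overhead_tokens(servers=2, tools_per_server=10, tokens_per_tool=150, turns=50)
print(bloated, lean)  # the dozen-server config pays 6x the overhead before any real work
```

That is how a session hits daily limits in minutes: the fixed tax is paid on every turn whether or not those tools are ever called.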

Clear System Prompts

System prompts run on every turn, so a bloated one taxes you forever. For day to day chats I ask for the shortest possible response and I get it. For projects and agents, I write a system prompt that is short, specific, and tells the model exactly what good output looks like. A vague system prompt makes the model guess, and guessing produces longer responses, more retries, and more tokens. Specificity is cheap. If you do need a long system prompt, lean on prompt caching. It lets the provider reuse the prompt across turns at a fraction of the per-token cost, which makes the difference between a system prompt that taxes you and one that does not.
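Here is a hedged sketch of the caching math, assuming a hypothetical base price and a 10x discount on cached input tokens; check your provider's actual pricing before relying on these numbers:

```python
# Assumptions: $3 per million input tokens, cached reads billed at 10% of base.
PRICE_PER_MTOK = 3.0
CACHED_DISCOUNT = 0.1

def system_prompt_cost(prompt_tokens, turns, cached):
    rate = PRICE_PER_MTOK * (CACHED_DISCOUNT if cached else 1.0)
    return prompt_tokens * turns * rate / 1_000_000

uncached = system_prompt_cost(8_000, 100, cached=False)
cached = system_prompt_cost(8_000, 100, cached=True)
print(f"uncached: ${uncached:.2f}, cached: ${cached:.2f}")
```

Under these assumptions, an 8,000-token system prompt over 100 turns drops from $2.40 to $0.24, which is the difference between a prompt that taxes you and one that does not.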

Fight the FOMO Urge

Every three months the goal posts move. A new model, a new framework, a new set of best practices, and new tools. Whatever the best and coolest tool offers today will be common and widely available in three to six months. Chasing every release is its own form of waste. Now I only adopt products that allow me to pivot models and offer MCP. I never adopted Claude Code, and stuck with Cursor. Pick the stack that makes you productive right now and let the innovation round robin come to you.

Retain Human Oversight and Control

The operator is still liable for the work product. That does not change because there are five agents in the loop instead of one. If you are producing 10x your own capacity, and you are doing it in domains, subjects, and technical areas you do not understand, you are creating risk. Speed without judgment is a recipe for shipping an opinionated, wrong answer.

Claude AI Cost Should Move the Needle

For any multi-step agent or AI-driven process, I require a detailed execution plan. I read that plan, and that time investment has prevented waste and risk. Typically, a plan worth executing takes an hour of my time and saves 3-5 hours. Otherwise, it's not worth the effort and risk.
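That rule of thumb can be written as a trivial payoff check. The 3x break-even multiple is my assumption of a reasonable floor, not a universal threshold:

```python
# Assumption: a plan should save at least 3x the time it takes to review.
def plan_is_worth_it(review_hours, hours_saved, min_multiple=3.0):
    return hours_saved >= review_hours * min_multiple

print(plan_is_worth_it(1.0, 4.0))   # an hour of review buying back 4 hours: worth it
print(plan_is_worth_it(1.0, 2.0))   # not worth the effort and risk
```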

Getting Started with Snowflake CoCo (Cortex Copilot)

Snowflake CoCo

Snowflake has been evolving quickly over the last year with its Cortex AI offering. Snowflake CoCo is Cortex Copilot, one of the clearest examples of Snowflake embracing a modern co-pilot approach that works incredibly well. It embraces several functions that I cover in my anatomy of a modern copilot article.

Instead of exporting data into external AI tools or building complicated integrations, you can now interact with your Snowflake data using natural language. The AI assistant lives directly inside the platform and works against the data already stored in your warehouse.


What Cortex Copilot Actually Does

At its core, Cortex Copilot provides a natural language interface to Snowflake. The formal Snowflake CoCo documentation covers what is supported, and I admit I haven't read it! I jump in, ask logical questions about real production problems, and I get correct answers 90%+ of the time.

Off the top of my head, here are the tasks where I have successfully tested CoCo and it felt frictionless.

  • Validating multiple versions of queries
  • Setting up a new DBT project
  • Migrating views and materialized views to DBT models
  • Troubleshooting broken SQL
  • Granting permissions and RBAC auditing tasks
  • Reviewing and troubleshooting YML for semantic models
  • Advanced searching based on table / view structure
  • Text to SQL
  • SQL diff comparison
  • Validating results between queries

Why This Matters for Data Teams

Most companies have invested heavily in building modern data stacks. Data warehouses, pipelines, and analytics tools are already in place. The pace of innovation from Snowflake has moved at a rate that is impossible to keep up with. Cortex provides a level playing field where new features, documentation, and best practices for using Snowflake, DBT, and other integrations have been packaged up as skills by the Snowflake team.

AI Where the Data Already Lives

One of the biggest advantages of Snowflake Cortex Copilot is that it is aware of your schema, semantic models, administrative functions, and more. As a modern co-pilot, it enforces role-based permissions and access policies. That has been a breath of fresh air as I invite more information workers into Snowflake Workspaces. It is something I never would have imagined heading into 2026!

How to Enable Snowflake Cortex in Snowflake

Getting started with Cortex requires only a couple of account level configuration changes.

First, enable the Cortex analyst functionality.

ALTER ACCOUNT SET ENABLE_CORTEX_ANALYST = TRUE;

Next, allow access to the models that power the Cortex features.

ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'ANY_REGION';

Some organizations prefer to restrict model access to a specific region. In that case the configuration can be set more narrowly.

ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'AWS_US';

Once these settings are enabled, Cortex capabilities become available within the Snowflake platform.
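To confirm the changes took effect, you can inspect the account-level parameters (pattern matching on the parameter names used above):

SHOW PARAMETERS LIKE '%CORTEX%' IN ACCOUNT;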

Final Thoughts on Snowflake CoCo

Cortex Copilot represents a meaningful shift in how we can interact with Snowflake.

I have already wired up Snowflake Cortex Copilot CLI to work inside of Cursor. It’s not as fast, but the additional layer of planning, orchestration and micro-knowledge loops has transformed the way I work. I don’t use Claude Code, but I am sure it works the same there. If you want my template, feel free to contact me directly.

The barrier to entry to work with data is the lowest it has ever been with Snowflake CoCo! Happy coding.

Anatomy of a Modern AI Co-Pilot

Modern Co Pilot

What Actually Matters After Using AI for Production Productivity

Over the past year I have continued to find new peaks in productivity at DataTools Pro. In this article, I break down some of the biggest unlocks I have experienced watching leaders build hyper growth businesses on the backs of well designed AI experiences. The common thread where I find the greatest productivity and rapid adoption is a well designed AI co-pilot. Work that requires human accountability typically requires a human in the loop. Here are the tools I am using every day that let me move 2-5x faster than in 2023.

  • Snowflake dev environments (Cortex Code)
  • Repo-driven IDE workflows (Cursor)
  • Micro-Apps and prototyping (Lovable)
  • Product and Web Analytics (PostHog AI)
  • GPT / Claude chat interfaces
  • Video editing (Descript)

Breaking Down Features for Peak AI Co-Pilot Productivity

After dozens of experiments across tools, I have applied lessons learned to DataTools Pro, where we manage strategy, business semantics, and metrics. Here is the framework that actually matters as I evaluate my own startups and decide what to adopt.


1. Multi-Turn Conversation

What it is

The ability to maintain context across iterative back-and-forth reasoning inside a session. It simulates short term cognitive continuity.

Without multi-turn, every request is stateless. With it, the AI remembers prior questions, assumptions, and constraints.

Why it matters

Real engineering work is iterative. You ask a broad question, narrow scope, introduce tradeoffs, refine logic. Multi-turn prevents constant context resets.

Example in action

  • GPT / Claude: You brainstorm architecture, refine it over 10–15 exchanges.
  • Cortex Code: You explore warehouse credit usage, then drill down into specific roles without re-briefing the account context.
  • Cursor: You modify a function, then adjust related files in follow-ups.
  • Lovable: You scaffold an app, then iteratively adjust schema and UI.
  • PostHog AI: You analyze funnel drop-offs, then pivot into retention metrics.

Multi-turn is table stakes now. But it only gives session continuity. It does not create long-term intelligence.
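A minimal sketch of what multi-turn state actually is: the accumulated message history, resent on every call. `call_model` here is a stand-in for any chat completion API, which is also why long sessions get expensive:

```python
# call_model is a stand-in for a real chat completion API call.
def call_model(messages):
    return f"(reply to: {messages[-1]['content']})"

messages = []
for user_turn in ["broad question", "narrow the scope", "introduce tradeoffs"]:
    messages.append({"role": "user", "content": user_turn})
    reply = call_model(messages)  # the model sees all prior turns, every time
    messages.append({"role": "assistant", "content": reply})

print(len(messages))  # 6 messages: three exchanges of accumulated context
```

Without the growing `messages` list, every request is stateless; with it, prior questions, assumptions, and constraints survive across the session.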


2. Context-Aware Reasoning

What it is

The model reasons against your environment, grounded in what you are specifically working on, instead of against abstract patterns based solely on the interaction itself.

  • Repository / code awareness
  • Metadata awareness
  • Change and usage logs
  • Visual awareness (screen grabs and computer vision)
  • App state (what you are doing in the present moment or historically)

Why it matters

This is the difference between “plausible” and “correct.”

Examples

  • Cortex Code: You ask, “Which warehouses consumed the most credits?” It generates SQL grounded in your actual Snowflake metadata.
  • Cursor: It refactors across your actual repo instead of hallucinating file names.
  • Lovable: It understands the state of the generated app and adjusts components coherently.
  • PostHog AI: It queries real event data to answer product questions.
  • GPT / Claude (standalone): Context awareness is limited to what you paste in manually.

Grounded context dramatically increases reliability and reduces hallucination.


3. Self-Reflection & Iterative Reasoning

What it is

The system critiques or refines its own output instead of stopping at first completion. This is effectively a quality control layer.

Why it matters

Speed without reflection creates brittle systems. Reflection increases decision quality.

Where we’ve seen this

  • PostHog AI: Agent loops evaluate output and adjust before finalizing analysis.
  • Cursor (partial): When prompted explicitly, it can compare approaches and refactor more carefully.
  • GPT / Claude: Capable, but requires manual prompting (“critique this”).
  • Cortex Code: Typically direct generation, not built-in critique loops.
  • Lovable: Focused on generation speed over architectural reflection.

Reflection is not default behavior in most tools. It has to be engineered or prompted.
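When you do engineer it, the loop is simple: generate, critique, revise until the critic is satisfied. In this sketch the three functions are stand-ins for LLM calls; a real critic would be a second prompt that returns a list of issues:

```python
# Stand-ins for LLM calls; only the loop shape matters here.
def generate(task):
    return f"draft answer for {task}"

def critique(answer):
    # A real critic prompt would return concrete issues; we flag drafts.
    return ["needs revision"] if answer.startswith("draft") else []

def revise(answer, issues):
    return answer.replace("draft", "revised")

def answer_with_reflection(task, max_passes=2):
    answer = generate(task)
    for _ in range(max_passes):
        issues = critique(answer)
        if not issues:  # stop once the critic is satisfied
            break
        answer = revise(answer, issues)
    return answer
```

The `max_passes` cap matters: every reflection pass costs tokens, so the quality-control layer needs its own budget.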


4. Agent Workflows & Task Loops

What it is

The ability to break an objective down and execute step-by-step with intermediate evaluation mirrors how most people solve problems. Agents that summarize work before execution create a much better experience, in my opinion.

Why it matters

This shifts AI from "answering questions" to "completing tasks", and one day to completing goals.

Strong examples

  • Cursor: Multi-file planning and stepwise refactors.
  • Lovable: Full-stack app scaffolding from high-level instructions.
  • PostHog AI: Analytics agents running multi-step investigations.
  • Cortex Code: Less agentic, more query-focused based on questions.
  • GPT / Claude: Capable but requires manual orchestration.

This is where copilots begin to feel like collaborators instead of search engines: when they demonstrate understanding. Breaking a problem into its smallest parts and recommending next steps is where you truly feel like you have a "co-pilot."
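The plan-execute-evaluate shape can be sketched in a few lines. Here `plan`, `execute`, and `evaluate` are stand-ins for model calls, and the step format is invented for illustration:

```python
# Stand-ins for LLM calls; the plan/execute/evaluate loop is the point.
def plan(objective):
    # Summarize work before execution: break the objective into steps.
    return [f"step {i}: {part}" for i, part in enumerate(objective.split(", "), 1)]

def execute(step):
    return f"done: {step}"

def evaluate(result):
    return result.startswith("done")

def run_agent(objective):
    results = []
    for step in plan(objective):
        result = execute(step)
        if not evaluate(result):  # intermediate evaluation gate
            raise RuntimeError(f"step failed: {step}")
        results.append(result)
    return results

print(run_agent("load data, transform, report"))
```

The evaluation gate between steps is what separates a task loop from blind generation: a failed step halts the run instead of compounding into downstream work.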


Exciting Innovations I’m Looking for in an AI Copilot

After running these systems in real workflows, here are the capabilities I look for that will make co-pilots even more useful!

Controlled and Secured Autonomy with Safe Reversion

As AI edits files, runs queries, or executes workflows, autonomy increases. What happens when AI accesses data it shouldn't? How do you recover? That is the "trust layer" that needs to be engineered at every layer of your technology stack.

A mature system must provide:

  • Suggest-only mode
  • Controlled edits
  • Test execution
  • Refactor execution
  • Deterministic rollback

Trust is built through reversibility.

Cursor approaches this through diff visibility. Most others still lack robust autonomy controls.
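One pattern for deterministic rollback is snapshot-before-write: the agent's edit is only applied after a backup exists, and reverting is a single move. This is a generic sketch, not how any particular tool implements it; the file paths are throwaway examples:

```python
import os
import shutil
import tempfile

def apply_edit(path, new_text):
    backup = path + ".bak"
    shutil.copy2(path, backup)  # snapshot before the agent writes
    with open(path, "w") as f:
        f.write(new_text)
    return backup

def rollback(path, backup):
    shutil.move(backup, path)  # deterministic revert: restore the snapshot

# Demo on a throwaway file.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "config.txt")
with open(path, "w") as f:
    f.write("original")

backup = apply_edit(path, "agent edit")
rollback(path, backup)
with open(path) as f:
    print(f.read())  # original
```

Real systems would layer suggest-only diffs and test execution on top, but the invariant is the same: no autonomous write without a cheap, guaranteed path back.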


Persistent Structured Memory

Persistent structured memory is long-term cognitive continuity. For now, I am collecting a mountain of "know how" in the form of MD files and knowledge bases across multiple domain-specific tools. ChatGPT is still my favorite for recalling fragments of work and reasoning.

A fun experiment is to open ChatGPT and ask:

What is it like to work with me? What are my top 3 strengths and what are my top 3 weaknesses?

What We’ve Learned from Lab Experiments

Embedding AI copilots into production workflows shifts the evaluation criteria. AI feels magical until you know what the output should be. That is why I look to best of breed co-pilot experiences as the guiding light for what I should be working toward.

Multi-turn was the first wave. Agent workflows were the second. The next frontier is institutional intelligence, where AI not only reasons in the moment, but compounds over time. That is why our investments in DataTools Pro from day 1 have been in cultivating business semantics from existing systems of record (Salesforce) and systems of understanding (Snowflake, Tableau).

Stress Testing Microsoft Copilot vs Claude vs ChatGPT

AI Bakeoff with MS CoPilot

This weekend, I did a real world Microsoft Copilot vs Claude vs ChatGPT bakeoff while wrapping up a lead magnet calculator. In preparation for a Microsoft call to discuss an AI Copilot rollout, I wanted some hands on experience.

The Bakeoff Workflow

  1. Take a detailed calculator requirements doc (AI generated from source code).
  2. Recreate a simplified version in Excel via prompt.
  3. Document the structure.
  4. Translate the workflow into an executive ready PowerPoint story.
  5. Use the output as preparation for a Copilot rollout conversation.

This would be a day of work for multiple people. The project was complete in less than an hour.


Phase One: Translating the App into Excel

The spreadsheet needed:

  • Clear input structure supplied by a 400 line markdown file.
  • Clean calculation logic
  • Organized output summary
  • Executive ready formatting for review and sign off
Microsoft Copilot: Copilot fragmented the logic across multiple tabs. Inputs and outputs were not logically grouped. Structural coherence was inconsistent. If an AI tool creates cleanup work, the productivity gain erodes immediately.

Claude: Claude generated a tight, single page spreadsheet. Inputs were grouped cleanly. Calculations were centralized. Outputs were summarized clearly. It felt intentional and the result was the best of the group.

ChatGPT: ChatGPT produced a multi tab structure with clear separation between inputs, logic, and results. It was operationally sound and logically organized. It required slightly more navigation than Claude's single page approach, but the structure held.

Microsoft CoPilot Excel | OpenAI Excel

Phase Two: Explaining the Build in PowerPoint

I have never been a fan of PowerPoint. It is a corporate time and knowledge sinkhole. My hope is one day data / knowledge management tools paired with LLMs will force PowerPoint to evolve or go away.

PowerPoint exists as a corporate knowledge artifact that memorializes a point in time. In concept that would be a great thing if the real story and context wasn’t lost in meetings and presentations where PowerPoints are delivered. Microsoft has all of the pieces to the puzzle, so I am blown away they haven’t put it all together.

Clearly this stress test wasn't going to be transformational to my way of working… At minimum, I wanted to produce a single slide that would explain my app design workflow and highlight how I was using AI:

  • How the idea evolved
  • How AI accelerated development
  • Where structure improved
  • Where friction was eliminated
Microsoft Copilot: Copilot generated an image instead of an editable diagram. My issue is that when shapes cannot be modified, the output becomes static decoration. Even the text was baked into the image, which is annoying.

Claude: Claude produced a comprehensive diagram with strong narrative flow. It mapped the journey clearly and felt cohesive. Text was editable.

ChatGPT: ChatGPT generated a simpler diagram, fully editable in PowerPoint. Less polished, more modular.

Microsoft CoPilot PowerPoint | Claude PowerPoint | OpenAI PowerPoint

My findings with Microsoft Copilot so far

Copilot's core advantage is integration within Microsoft 365. Outlook was not part of this evaluation, but I am praying that when I get to the proof of value, it is the star of the show. The Excel and PowerPoint experience was underwhelming for creation. However, I did use Copilot to evaluate and edit my Claude-produced Excel. It did a great job with that task.

Adoption fails when cognitive load remains unchanged. Frustration happens when more time and cognitive load are required than with the previous solution. Without a major payoff in the form of pain reduction or value creation, it's tough to recover.

Bottom line: Claude felt magical, and Copilot felt like something I experienced 18 months ago in ChatGPT. But enterprise platform alignment with Office and Azure, security, and wider distribution are real value drivers, and they could make Copilot an acceptable solution despite that feeling of being behind.

My Strategic Criteria for Evaluating Copilot

Productivity and Communications Compression Across Microsoft 365

Copilot’s primary strategic function is to compress knowledge work inside the Microsoft ecosystem. Copilot is not designed to replace core application functions; it is designed to accelerate them.

When it comes to communication (email and Teams), my hope is Copilot will clearly increase the velocity of information consumption and delivery. If not, the upcoming proof of value exercise could be short-lived.

My primary objectives as I evaluate Copilot:

  • Streamline email search (Gemini in Gmail has been a game changer)
  • Speed up responses via email
  • Shorten drafting cycles
  • Consolidate meeting summary tools into one repository
  • Speed up spreadsheet modeling
  • Automate presentation generation

Provide a secure, standardized AI layer across the organization

Security is a major concern for every operator and executive when it comes to these AI models. Copilot provides at least one controlled AI entry point with potential access to confidential data.

My Biggest Concerns as I Continue exploring

  • Training focused on value creation – Understanding the span of capabilities is important, but connecting business challenges to tech is where we will create value.
  • Clear use case alignment – The gap between expectations of what is possible and real feature availability is a concern I want to remove early.
  • Adoption management – If users do not adopt, it is a failure. If Copilot fails, we will fail fast and move on to the next alternative.

Without high value use cases, adoption, and education, AI becomes just another data tool that blames bad data or process rather than an enabler that reduces operational drag.


Final Take

AI productivity is not about who generates prettier demos. Real AI success requires distribution of knowledge and experience across a team. Data alignment and influence are about getting a group of people rowing at the same speed and in the same direction. AI is the same data activation and knowledge delivery exercise as analytics, so I feel well equipped to take it on!

There are many other bright spots for Microsoft and AI including the work I have done in Azure and recently with Power BI MCP Server at BIChart.

In this test, Claude shined the brightest. I am still excited to do a proper CoPilot proof of value and see how it goes!

The Tableau vs Power BI Rap Battle: So Cringy it’s Addictive

Rap Battle

Over Thanksgiving break, I decided to mash up the classic data-geek debate of “Tableau vs Power BI” into an AI-powered rap battle. Three rounds of diss tracks with AI on the mic.

I’ve screened it with a few folks, and the reviews so far? Fun. Cringe. Silly. Freaking awesome.

Sometimes technology takes itself a little too seriously. This is meant to be silly, and a balanced showcase of where things stand. Not since the East Coast vs. West Coast battles of the ’90s have we seen such fierce loyalty between two groups of data sense-making professionals.


Where do I stand on the debate between Tableau and Power BI?

Working at DataTools Pro, where I use Tableau daily, while migrating Tableau to Power BI at BIChart, I have preferences depending on the use case.

I choose the right solution that works best for the team, skills, investments, and what leads to the highest adoption!

I will let the community continue to debate. Check out the site, sign up for notifications over the next couple weeks, and enjoy!


Testing Salesforce External Client App with our DataTools Portal

AI Assistant

This week, I had a chance to update documentation and explore Salesforce External Client App configuration. There have been so many changes to Salesforce connected apps in terms of integration and commercial requirements. It is daunting for customers and partners.

What is Changing from Salesforce

  1. 3rd party tools that use the deprecated "Connected App" functionality will no longer gain the ability to connect to new Salesforce orgs in Spring 2026. Partners will need to upgrade, or get left behind. We are going to fork the DataTools Pro app to no longer use Salesforce for federated access to DataTools Pro.
  2. Integrated apps will need to join Salesforce App Exchange where fees are collected. This is going to cause a ton of friction and headache for vendors. DataTools Pro is already integrated into the AppExchange so this does not impact us.

What about Internal Built Apps?

This is an area that's genuinely confusing and murky, so I decided to jump right in by building our new customer and partner portal. It sets me up so Salesforce is the system of record for customers, while events and activity-related data are linked only by a single external UID.

The portal integrates with our support Slack, Salesforce, OpenAI, Stripe, and the DataTools Pro app. After running this experiment, it’s easy to see why Salesforce is scrambling to control and monetize the data within Salesforce.

When you build a portal / community in Salesforce, you are building for a point in time that has passed. We have opened up our portal for anyone to login via magic link to poke around and will rollout our new DataTools Shop in 2026!

https://portal.datatoolspro.com

We are moving to a more traditional federated login configuration with Google and Microsoft / Entra, and expanding our enterprise-specific SSO support.

Learn how to Setup Salesforce External Client Apps

If you are interested in the nitty gritty details of configuring OAuth for External Client Apps, I have updated our Azure Data Factory tutorial to explain the process.

Our Thanksgiving 2025 Anthem: The Data Song

The Data Song

Some of our work followed me into Thanksgiving weekend, as I found myself playing with AI audio on Suno. One of the first outputs from my experiments is just a small taste of what we came up with.

The Data Song will tell you something, and absolutely nothing, about data at the same time! It's silly and the amuse-bouche to what's next!

Building Value, Delivering Success, and Learning from Failure

Aside from silly experiments, BI migrations with BIChart are picking up a lot of steam headed into 2026. Before BIChart, I saw multiple migrations crash and burn. Automating BI migrations is tough work, and there is a lot of it happening! The BIChart assessment is proving to be quite useful for clients long before migration is finalized. Here is a November BIChart blog roll, including a recent case study:

DataTools Pro Explorer is now in Snowflake Store

Since we started DataTools Pro in late 2023, we have created more than 20 tools to help us solve a number of simple but painful problems. The first official DataTools Pro native app is available in the Snowflake marketplace. View and install it in Snowflake for free!

Some thoughts on “AI Readiness” Buzzword

My perspective is “AI Readiness” is just “Readiness” for the next round of disruption we are all experiencing.

Being in a business / technology role for 20 years, I know that disruption is always on the horizon. Every facet of our human-to-computer experience is continuously improving. Companies like Cursor and Lovable have re-defined hyper growth, reaching $100M and $200M in revenue in their first year of operation. Both AI co-piloted tools are woven into my daily work. The barriers and friction to becoming a creator are the lowest ever, as demonstrated by my silly video.

How this Viewpoint Impacts Work

My personal experience using generative AI has been positive when my role is steering and course correcting. I have watched generative AI transform individual contributor productivity. However, that productivity plateaus quickly when moving beyond individual contributor into group work. The promise of autonomous agents requires continuous improvement and clear feedback loops from subject matter experts. This is the lens I look through while re-working Metrics Analyst for autonomous workloads and clear feedback loops.

DataTools Pro Metrics Analyst
Upcoming Metrics Analyst 3.0

Lovable Vibe Coding: From Prototype to Production in 8 Hours

Lovable App

Lovable Vibe Coding has captivated the market, enabling "vibe coders" with the ability to prototype and deploy apps in record time. With a full book of clients and workload, I found myself with an abundance of time while fighting off a horrible cold for a week. I posed the question: how long would it take me to build a fully functioning app with the following features?

  1. User sign-up and authentication
  2. Integration with 3rd party service: OpenAI
  3. Control over mobile phone function (camera)
  4. Secure with storage for upload
  5. Account and subscription management
  6. Payment management (Stripe)
  7. Responsive, multi-device support (mobile and desktop)
  8. Data export to CSV
  9. Secure delivery optimized system emails

Solving a simple problem: Return Mail Rate Analysis

With no shortage of app ideas and prototypes, I decided to solve a very specific problem related to direct mail marketing: return mail management.

I decided to solve a simple analog problem: how can we quickly capture data from direct mail marketing returns? Previously, the solutions were an expensive document scanner, manual data entry, or discarding the mail as a sunk cost. I built the Returnzilla app, which allows anyone with a mobile phone to rapidly photograph return mail. Those photos are batch-converted into a data table using AI vision. That data is structured and prepared in the Returnzilla app for reporting return rates or for integration into a suppression/enrichment list.

Working in 2 Hour Sprints

When I built this app, I was in bed sick, so I completed 80% of the work typing into the Lovable AI chatbot from my iPhone. In 6 hours over 2 days, I had a working app that I was sharing with friends and family. They helped me come up with the name Returnzilla!

Having a clear vision of outcomes and value is more important than knowing what features to build

If you don’t know exactly what to build or how to solve the problem, you should not start in Lovable. Instead, you should start your journey with ChatGPT or Claude. Explain the problem you are solving, and what features you are considering to solve the problem. Have the LLM break down and plan features that will make the most sense. Copy and paste that information into Lovable and let AI take the first leap forward, prototyping your app.

Returnzilla

Supabase is the unsung hero of Lovable Success

Lovable is a great front-end prototyping tool. To build an app, you need a database and middleware services to transact data to and from your database and third-party services. That is where Supabase comes in. The foundation for your app lives in Supabase:

  • Managed PostgreSQL Database: Supabase provides a fully managed PostgreSQL database, offering the power and flexibility of a relational database system with features like advanced data types, indexing, and full SQL support.
  • Authentication and User Management: It includes a comprehensive authentication system to securely handle user sign-ups, logins, and access control. It supports various methods like email/password, social logins (Google, GitHub, etc.), and multi-factor authentication.
  • Realtime Subscriptions: Supabase enables real-time data synchronization between the database and connected clients, allowing for instant updates in applications like chat or live dashboards.
  • File Storage: Supabase offers a secure and scalable file storage solution (built on S3 compatibility) for managing user-generated media like images, videos, and other files.
  • Edge Functions (Serverless): It allows developers to deploy and run serverless functions (written in TypeScript) globally at the edge, close to users, reducing latency and improving performance.

Lovable Vibe Coding Prototype vs Production

To deploy my app, I consulted a senior engineer for a code review. Lovable and Supabase together accelerate a lot of the iterative work; a build process measured in hours shaved weeks off my timeline. However, moving beyond a prototype is not as simple as publishing to the web. Even with a simple app like Returnzilla, I had to take some important steps to properly secure it.

Lovable Vibe Coding

Lovable does provide security scans as part of the design and development process. If you have ever built an app, you know that scrubbing inputs, content security policies (CSPs), and other basic app design best practices must be followed. For a basic website, like the one I built for GoodmanGroup LLC, not much work was needed to make it production-ready. But the moment you start collecting data and adding gated content that requires a login, the requirements to get to production change dramatically! I highly recommend seeking advice and oversight from a senior engineer before considering your app production-ready.

For DataTools Pro, I already have access to a number of paid services and previous work that I reused. Here is a basic list of configuration steps required to prepare for production.

Secure communication relay for email – I use SendGrid to handle my application emails.

Domain Routing and Web Application Firewall (WAF) – I moved my app to the datatoolspro domain. Cloudflare is my DNS and proxy for all web apps.

Captcha – I use Cloudflare Turnstile to handle my captcha, which helps block bots from trying to sign up or attempting to overload my forms.
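Server-side, a Turnstile token is checked against Cloudflare's documented `siteverify` endpoint before the form submission is accepted; the helper names below are illustrative:

```typescript
// Cloudflare's documented verification endpoint for Turnstile tokens.
const SITEVERIFY_URL =
  "https://challenges.cloudflare.com/turnstile/v0/siteverify";

// Pure helper: build the form body siteverify expects.
function turnstileBody(
  secret: string,
  token: string,
  remoteIp?: string,
): URLSearchParams {
  const body = new URLSearchParams({ secret, response: token });
  if (remoteIp) body.set("remoteip", remoteIp);
  return body;
}

// Returns true only when Cloudflare confirms the token; reject the form otherwise.
async function verifyTurnstile(
  secret: string,
  token: string,
  remoteIp?: string,
): Promise<boolean> {
  const res = await fetch(SITEVERIFY_URL, {
    method: "POST",
    body: turnstileBody(secret, token, remoteIp),
  });
  const data = (await res.json()) as { success: boolean };
  return data.success;
}
void verifyTurnstile;
```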

File Security – When I implemented file upload in Lovable, the default setting is to leave files wide open to the internet. If you do not have experience designing signed file access from a web app, you will need help from a developer.

Access to Device Camera – I set up Returnzilla to request access to the camera, but not the photo library. Error handling and mobile/desktop device handling took some time to test and validate, and required guidance for the AI that probably would have been easier to code by hand.

Testing and QA

Lovable does an incredible job creating clean user experiences and connecting front-end, middleware, and backend. For this app, there are very few screens and workflows, so I was able to manually unit test and system test without fuss. Knowing your way around the browser console and logs is very helpful. A more complex app will require proper regression testing, system testing, and promotion management. I moved my Lovable work to a development branch and now promote through a versioning step in GitHub.

These standard software development lifecycle procedures are just one example where you may need to make a jump from “vibe coding” prototypes to a properly managed and maintained application.

My caution to anyone wanting to vibe code their way into production: stick to very simple apps, or use Lovable for what it does best… prototyping.


Bottom Line: I love Lovable + Supabase

I love any product that allows me to extend my knowledge and experience to get more done. This design pattern, using AI to rapidly prototype, has ushered in a new process for our work on DataTools Pro, where I want to increase throughput by 30% by the end of the quarter.

Vibe-coded micro-apps will change the way marketers think about microsites and lead magnets. Returnzilla, at its core, is a lead magnet for data services, identity reconciliation, and full-funnel analytics. Now I have a hook to reach a segment of marketing leaders who are doing direct mail. I will report back on how it works!

If you happen to do direct mail marketing, give Returnzilla a try. The first 50 scans are free!

Adventures with Snowflake MCP and Semantic Views

Snowflake MCP and Claude

Last month, I had an opportunity to roll up my sleeves and start building analytics with Snowflake MCP and Snowflake Semantic Views. I wanted to see how far I could push real-world analyst and quality assurance scenarios with Tableau MCP and DataTools Pro MCP integration. The results gave me a glimpse of the future of AI/BI with real, production data. My objective was to deliver a correct, viable analysis that otherwise would have been delivered via Tableau.

The time spent modeling my data, providing crystal-clear semantics, and using data with zero ambiguity paid off. The lab delivered great results, but I ended it with serious concerns over governance, trust, and quality assurance layers. This article highlights my findings and links to step-by-step tutorials.


Connecting Claude, Snowflake MCP, and Semantic Views

The first step in connecting all of the components was building my Snowflake Semantic Views. Snowflake MCP gave me the framework to orchestrate queries and interactions, and Snowflake Semantic Views gave me the lens to apply meaning. All of my work and experimentation occurred in Claude, which gave me the AI horsepower to analyze and summarize insights. To connect Snowflake to Claude, I used the official Snowflake MCP Server, installed on my desktop and configured in Claude.
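For reference, Claude Desktop discovers MCP servers through its `claude_desktop_config.json` file. The `mcpServers` shape below is the documented pattern; the launcher command, arguments, and environment variable names are left as placeholders because the specifics come from the Snowflake MCP Server's own install instructions:

```json
{
  "mcpServers": {
    "snowflake": {
      "command": "<launcher, e.g. uvx>",
      "args": ["<snowflake-mcp-package-and-options>"],
      "env": {
        "<SNOWFLAKE_ACCOUNT_VAR>": "<account-identifier>",
        "<SNOWFLAKE_AUTH_VAR>": "<personal-access-token>"
      }
    }
  }
}
```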

Together, these tools created a working environment where I could ask questions, validate results, and build confidence in the answers I got back.


Creating Snowflake Semantic Views

With my Snowflake Semantic View set up, I spent some time researching and reading other folks’ experiences with semantic views. I highly recommend having a validated and tested Semantic View before embarking on AI labs. If you don’t know what metadata to enter into your Semantic View, seek advice from subject matter experts. AI can fill in blanks, but it shouldn’t be trusted to invent meaning without human oversight: Why AI-Generated Meta-Data in Snowflake Semantic Views Can Be Dangerous

Bottom line… Begin with a simple and concise Snowflake semantic model. Build clearly defined dimensions and measures. Use real-world aliases, and refrain from using AI to fill in the blanks unless that is your explicit objective. Layer on complexity once you’re comfortable with the results.
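As a sketch of what “simple and concise” means in DDL terms, here is the general clause structure of Snowflake’s `CREATE SEMANTIC VIEW` statement. The database, table, and column names are hypothetical, and the exact clause syntax should be checked against Snowflake’s documentation:

```sql
-- Hypothetical minimal semantic view: one table, two dimensions, one metric.
CREATE OR REPLACE SEMANTIC VIEW analytics.semantic.sales_sv
  TABLES (
    orders AS analytics.raw.orders
      PRIMARY KEY (order_id)
      COMMENT = 'One row per customer order'
  )
  DIMENSIONS (
    orders.order_date AS order_date
      WITH SYNONYMS ('purchase date')          -- real-world aliases
      COMMENT = 'Date the order was placed',
    orders.region AS region
      COMMENT = 'Sales region, e.g. NA, EMEA'
  )
  METRICS (
    orders.total_revenue AS SUM(orders.order_amount)
      COMMENT = 'Gross revenue in USD'
  )
  COMMENT = 'Minimal sales semantic view for AI/BI experiments';
```

The comments and synonyms are not decoration: they are the semantics an LLM reads, which is why unvalidated AI-generated values there are dangerous.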


What Worked Well

  • Control over data access
    Thankfully, the Snowflake MCP is limited to Semantic Views and Cortex Search. The opportunity and value of Cortex Search cannot be overstated; I will cover it in another post. The idea of unleashing an AI agent with elevated permissions to write SQL against your entire data warehouse is a governance nightmare. Semantic Views gave me the ability to scope exactly what Claude could see and query.
  • Accuracy of results
    The top question I get during AI labs is: “Is this information correct?” I kept a validated Tableau dashboard on my other monitor to check the correctness of every answer.
  • Simple to complex questioning
    My recommendation with any LLM-powered tool is to start with high-level aggregate questions. Use these to build a shared understanding and confidence. Then, grounded in validated facts, you can drill down into more detailed questions with confidence. This approach kept me in control when the analysis moved beyond existing knowledge and available analysis.

Where I Got Stuck

Three challenges slowed me down:

  1. Metadata gaps – When the semantic layer lacked clarity, Claude produced ambiguous answers. It isn’t a garbage-in, garbage-out problem… it is me having subject-matter expertise that was not captured in my semantic layer or in a feedback loop to make the AI system smarter. LLM analysts feel less magical when you already know the answers. That is where adding Tableau MCP allowed a pseudo peer review to occur.
  2. Over-scoping – When I got greedy and exposed too many columns, ambiguity crept in. AI responses became less focused and harder to trust. Narrower scope = better accuracy.
  3. Context limits – I had Claude do a deep analysis dive. I also had it code a custom funnel dashboard that perfectly rendered a visual funnel with correct data. At some point, Claude explained that my context limit had been reached. My analysis hit a brick wall, and I had to start over. Claude is a general-purpose AI chatbot, so limits are expected, but it was still disappointing to hit a stride and have to stop working.

Risks You Should Know

If you’re using AI to build your semantic layer, you need to be aware of the risks:

  • AI-generated semantics can distort meaning. It’s tempting to let an LLM fill in definitions, but without context, you’re embedding bad assumptions directly into your semantic layer: Why AI-Generated Meta-Data in Snowflake Semantic Views Can Be Dangerous
  • Do not give LLMs PII or Sensitive PII. As a rule of thumb, I do not add PII or sensitive PII into semantic models. I hope that at some point we can employ Snowflake aggregation rules or masking rules.
  • Governance blind spots. Connecting the Snowflake MCP requires access from your desktop. For governance, we use a personal access token for that specific Snowflake user’s account. That ensures all requests are auditable. Beyond a single user on a desktop, it’s unclear how to safely scale the MCP.
  • False confidence. Good syntax doesn’t equal good semantics. Always validate the answers against known results before you scale usage.
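The masking-rule idea above is already expressible in standard Snowflake DDL today; here is a minimal sketch, with the role, table, and column names hypothetical:

```sql
-- Show real emails only to an approved PII role; everyone else sees a mask.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
    ELSE '*** MASKED ***'
  END;

-- Attach the policy to a hypothetical contacts table.
ALTER TABLE crm.contacts MODIFY COLUMN email
  SET MASKING POLICY email_mask;
```

With a policy like this in place, a scoped MCP user never receives raw PII, even if a column slips into the semantic view.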

Final Take

Snowflake MCP and Semantic Views are still very much experimental features. They provide a glimpse of what will be possible once the barriers to accessing governed, semantically correct data are removed.

In my case, I employed DataTools Pro for deeper metric glossary semantics and a writeback step via Zapier to capture learnings, redirections, and insights for auditing purposes. If you would like assistance setting up a lab for testing, feel free to contact us to set up a complimentary session.