Privacy Controls for RAG: Implementing Row-Level Security and Redaction

Learn how to secure RAG architectures using row-level security and redaction. Prevent data leaks and PII exposure in LLM applications with a defense-in-depth strategy.
Design Systems for AI-Generated UI: Keeping Components Consistent

Learn how to integrate AI into your design systems using design tokens and constraint-based generation to maintain UI consistency and accessibility.
Mastering LLM Prompting for Unit Tests and Code Refactoring

Learn how to use structured prompt patterns like the Recipe and Context patterns to generate passing unit tests and safe code refactors using LLMs.
Data Retention Policies for Vibe-Coded SaaS: What to Keep and Purge

Learn how to manage data retention for vibe-coded SaaS apps. Avoid GDPR fines and reduce storage costs by implementing strict data minimization in your AI prompts.
Solving Dataset Bias in Multimodal Generative AI: A Guide to Fair Representation

Explore how dataset bias affects multimodal generative AI, the difference between underrepresentation and misrepresentation, and the latest CA-GAN techniques to ensure fair AI outputs.
Structured Prompting: How to Constrain LLM Reasoning for Better Accuracy

Learn how to use structured prompting to constrain LLM reasoning, reduce hallucinations, and improve factuality through frameworks like Chain-of-Thought and DisCIPL.
Why Transformers Use Two-Layer Feedforward Networks for LLM Performance

Explore why the two-layer feedforward network is essential for LLMs. Learn how this design balances non-linearity, factual memory, and computational efficiency.
Transparency and Explainability in Large Language Model Decisions

Explore how transparency and explainability impact large language model decisions, covering data provenance, bias mitigation, and XAI techniques for trustworthy AI.
How LSTMs Paved the Way for Transformer-Based Large Language Models

Explore how LSTM networks solved the vanishing gradient problem and set the foundation for modern Transformers. This guide covers the architectural evolution from sequential processing to parallel attention mechanisms.
Agentic Generative AI: Mastering Autonomous Planning and Workflow Execution

Discover how Agentic Generative AI transforms reactive chatbots into proactive systems that plan and execute multi-step workflows autonomously.
Cybersecurity and Generative AI: Threat Reports, Playbooks, and Simulations
Tess Rempel

Explore the 2026 cybersecurity landscape, where Generative AI drives both threats and defense. Learn about key risks such as prompt injection and shadow agents, and how to build effective security playbooks using industry frameworks.
Building AI Chatbots and Assistants with Vibe Coding and Retrieval Systems

Learn how to build advanced AI assistants using vibe coding and retrieval-augmented generation. We cover tools like Cursor and Windsurf, practical RAG setups, and real-world troubleshooting tips.