The BITZ unibz fablab is a community workshop for hobbyists, researchers, and students in Bozen

Dates: 30 May; 6, 13, 20 June
Time: 2-5 pm

BITZ unibz fablab now offers a course on ComfyUI,
a powerful free and open-source application for AI-powered generation of images, video, 3D models, sound, voice, text, and more.

A powerful open-source, node-based application for generative AI!
ComfyUI gives you more control and creative freedom than almost any other tool!

  • Visual control: Unlike many other AI tools, ComfyUI allows for a modular, visual workflow – ideal for designers and artists who want to directly influence image composition and style.
  • Open Source = Open Mind: You’ll not only learn how to use a tool, but also understand how AI image generators work – with no black box.
  • Future skill: AI is becoming increasingly important in the creative industry – from concept art and fashion to media and product design. With ComfyUI, you’ll gain both technical know-how and creative freedom.

What to expect

  • Introduction to AI image generation with Stable Diffusion
  • Hands-on work with ComfyUI
  • Creative mini-projects & experiments
  • No prior knowledge required

Sign up now!
The course is free and open to everyone!
Spaces are limited!

To register, email: Stephan.Pircher@unibz.it
Let’s explore together how art and technology can open up new paths to creativity.
We look forward to seeing you at BITZ!

—————————————————–

Learn more about ComfyUI:

Official website: comfy.org

ComfyUI Beginner tutorial: https://openart.ai/workflows/academy

Showreel AI 2024: https://youtu.be/Y5Isghixqq4?si=Of2W8eAjlbypuR1e

Below is a list of some possible ComfyUI workflows:

  1. Creative Workflows
  • Text-to-Image (TTI): Generate an image from a text prompt (see the API sketch after this list)
  • Image-to-Image (ITI): Use an image as a template for variations
  • Inpainting: Selectively replace specific areas of an image with new content
  • Outpainting: Expand an image beyond its original borders
  • ControlNet Variations: Achieve precise control through additional image information
  • Pose-to-Image: Human pose determines the content of the image
  • Depth-to-Image: Depth maps influence the structure of the image
  • Scribble-to-Image: Use a sketch as the foundation for the image
  • Canny-to-Image: Use edge detection images as a control element
  • Style Transfer: Apply the style of one image to another
  • QR-Code Diffusion: Creatively and legibly embed QR codes into images
  • LoRA Mixing: Combine multiple style models
  • Model Merging: Merge two or more AI models
  • Prompt Matrix: Automatically combine and compare multiple prompts
  • Upscaling: Upscale images with AI-powered sharpening
  • AnimateDiff: Combine individual images into animations
  • Prompt Morphing: Smoothly blend between prompts
  • GIF Creation: Generate animated sequences from created images
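
These workflows are also fully scriptable: ComfyUI runs a small HTTP server (by default on port 8188), and any workflow exported in "API format" can be queued from a few lines of Python. The sketch below is a minimal version of the Text-to-Image item; the file name workflow_api.json and the node ID "6" are placeholders that depend on your own exported graph.

```python
# Minimal sketch: queue a text-to-image job on a local ComfyUI server.
# Assumes ComfyUI is running at 127.0.0.1:8188 and that a workflow has
# been exported in "API format" as workflow_api.json.
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Overwrite the text of the positive-prompt CLIPTextEncode node.
# "6" is a placeholder node ID; check your exported JSON for the real one.
workflow["6"]["inputs"]["text"] = "a watercolor painting of the Dolomites at sunrise"

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(SERVER + "/prompt", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # the response includes the queued prompt_id
```
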
  2. Technical & Experimental Workflows
  • GLIGEN: Place objects precisely using bounding boxes
  • T2I Adapter: Alternative to ControlNet with lower data requirements
  • Noisy Latent Composition: Combine multiple image subjects into one
  • Semantic Segmentation: Segment images into semantic zones
  • Normal Map Generation: Create structural information for 3D applications
  • Depth Map Generation: Generate depth information for VR/3D output
  • Audio-to-Image: Use sound or music to influence image generation
  • Terminal Node (Shell, Python): Run arbitrary scripts directly
  • Prompt Comparison Automation: Automatically compare text prompts
  • Batch Generation from JSON/CSV: Data-driven image production (sketch after this list)
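
Building on the text-to-image sketch above, the Batch Generation item can be as small as a loop over a CSV file. The file prompts.csv, its "prompt" column, and the node ID "6" are again illustrative assumptions.

```python
# Sketch: queue one ComfyUI job per row of a CSV file (same assumptions
# as the text-to-image sketch above; prompts.csv needs a "prompt" column).
import csv
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"

with open("workflow_api.json", "r", encoding="utf-8") as f:
    template = json.load(f)

with open("prompts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        workflow = json.loads(json.dumps(template))  # fresh copy per job
        workflow["6"]["inputs"]["text"] = row["prompt"]
        payload = json.dumps({"prompt": workflow}).encode("utf-8")
        req = urllib.request.Request(SERVER + "/prompt", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            print(row["prompt"], "->", json.loads(resp.read()).get("prompt_id"))
```
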
  3. Language, Text & Voice
  • Prompt Engineering via LLM: AI creates or improves prompts
  • Captioning with CLIP + LLM: Automatically describe images
  • TTS (e.g., Bark, ElevenLabs): Convert text to natural speech
  • STT (e.g., Whisper): Convert speech to text (sketch after this list)
  • Voice Cloning (RVC): Imitate voices from a few samples
  • Voice-to-Image: Generate images from spoken text
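
To give a sense of scale for the STT item: with the open-source Whisper package (pip install openai-whisper), transcription takes only a few lines. The audio file name below is a placeholder.

```python
# Minimal speech-to-text sketch using OpenAI's open-source Whisper package
# (pip install openai-whisper). The audio file name is a placeholder.
import whisper

model = whisper.load_model("base")        # small, CPU-friendly model
result = model.transcribe("lecture.mp3")  # language is detected automatically
print(result["text"])
```
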
  4. 3D & AR/VR Workflows
  • TTI + Depth2Mesh: Turn text descriptions into 3D models
  • Normal2Geometry: Convert normal maps into geometry
  • NeRF: Reconstruct 3D models from 2D images
  • Volumetric Rendering: Create 3D light volumes for realistic depth
  • SMPL Avatars: Human avatars with movement/poses
  • Multi-View Consistency: Generate consistent images from different viewpoints
  • 360° Rendering: Create spherical panoramas or VR images
  • ComfyUI + Unity Integration: Control interactive 3D scenes
  5. Utility & Automation
  • Scheduled Generation: Automate generation at set intervals
  • Random Style Switcher: Apply styles randomly
  • File-Watcher Nodes: Respond to file changes in the system
  • EXIF/Metadata Injection: Automatically insert image metadata
  • Image Comparison (MSE, SSIM): Measure differences between images (sketch after this list)
  • Template-based UI Mockups: Automatically populate design templates
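
For the Image Comparison item, a minimal sketch with NumPy and scikit-image might look like this; the file names are placeholders, and the images are assumed to be same-sized RGB (or RGBA) files.

```python
# Sketch: compare two same-sized RGB images with MSE and SSIM
# (pip install scikit-image). File names are placeholders.
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity

a = rgb2gray(imread("render_a.png")[..., :3])  # drop alpha channel if present
b = rgb2gray(imread("render_b.png")[..., :3])

mse = np.mean((a - b) ** 2)                         # 0.0 means identical
ssim = structural_similarity(a, b, data_range=1.0)  # 1.0 means identical
print(f"MSE={mse:.5f}  SSIM={ssim:.4f}")
```
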
  6. High-End / Advanced Workflows
  • Dreambooth Training / LoRA Training: Train your own models from images
  • Real-Time Video2Video: Transform live camera images with AI
  • Stable Video Diffusion (SVD): Create moving scenes from images and prompts
  • Arduino Control via ComfyUI: Use sensor data to control AI output (sketch after this list)
  • Robot Integration: Connect generative AI with the physical world
  • Multi-User Queue Server: Multi-user management with task queues
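
Finally, a hedged sketch of the Arduino item: read one value from an Arduino over USB serial (pip install pyserial) and turn it into a prompt for the queueing sketch shown earlier. The port name, baud rate, and the one-number-per-line protocol are all assumptions about your own setup.

```python
# Sketch: map an Arduino sensor reading to a text prompt
# (pip install pyserial). Port, baud rate, and the one-number-per-line
# protocol are assumptions about your own sketch and wiring.
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
line = ser.readline().decode("ascii", errors="ignore").strip()
reading = int(line) if line.isdigit() else 0  # e.g. a photoresistor, 0-1023

mood = "bright morning light" if reading > 512 else "a moody night scene"
prompt = "a city street in " + mood
print(prompt)  # feed this into the ComfyUI queueing sketch above
```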