The BITZ unibz fablab is a Community Workshop for hobbyists, researchers and students in Bozen.

GOG exhibition

GOG Gäste – Ospiti – Guests IS BACK!

The Faculty of Design and Art opens its doors to present the semester projects developed in its courses 💥

Exhibition is open to everyone!
Friday, 13.06. from 6 PM
Saturday, 14.06. from 11 AM to 5 PM

Come by and enjoy some young spirited creativity!

#unibz #gog #design #art #artanddesign #facultyofdesignandart #exhibition #bolzano

Sew your bucket hat

We have something for all sewing fans! This time you can discover sewing by making and customizing a REVERSIBLE BUCKET HAT 👒 Cool, right? Learn to sew and protect yourself from the strong summer sun! 🌞

Bring your fabric and any special accessories you want to add. ✂️  
📆 Date: 17 + 18 June  
⏰ Time: 2 – 5 pm  
📍 Location:  
BITZ unibz fablab 
Via Rosmini 9  
Bolzano  

💰 10€/Person  
💸 Free for Students  

📩 Registration via email: bitzfablab@unibz.it  

AI Course: Learn ComfyUI

Date: 30 May; 6, 13, 20 June
Time: 2 – 5 pm

BITZ unibz fablab now offers a course on ComfyUI,
a powerful, free and open-source tool for AI-powered generation of images, video, 3D models, sound, voice, text and more.

Open-source and node-based, ComfyUI is one of the most powerful applications for generative AI and gives you far more control and creative freedom than most other tools!

  • Visual control: Unlike many other AI tools, ComfyUI allows for a modular, visual workflow – ideal for designers and artists who want to directly influence image composition and style.
  • Open Source = Open Mind: You’ll not only learn how to use a tool, but also understand how AI image generators work – with no black box (see the example graph just below).
  • Future skill: AI is becoming increasingly important in the creative industry – from concept art and fashion to media and product design. With ComfyUI, you’ll gain both technical know-how and creative freedom.
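
To make the “node-based, no black box” point concrete: a ComfyUI workflow is simply a graph of nodes whose inputs are either literal values or references to another node’s output. The snippet below is a minimal text-to-image graph sketched as a Python dictionary in the style of ComfyUI’s “Save (API Format)” export; the node IDs, checkpoint filename and prompts are illustrative placeholders, not course material.

    # Minimal text-to-image graph, sketched in the style of ComfyUI's
    # "Save (API Format)" export. Each key is a node ID; each input is either
    # a literal value or a reference ["<node id>", <output index>] to another node.
    workflow = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd_model.safetensors"}},   # placeholder filename
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "a bucket hat on a sunny beach",  # positive prompt
                         "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "blurry, low quality",            # negative prompt
                         "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                         "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "comfy_demo"}},
    }

Every setting you would click in the editor (sampler, steps, CFG, resolution) is visible here as an ordinary field, which is exactly what makes the pipeline inspectable.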

What to expect

  • Introduction to AI image generation with Stable Diffusion
  • Hands-on work with ComfyUI
  • Creative mini-projects & experiments
  • No prior knowledge required

 

Sign up now!
The course is free and open to everyone!
Spaces are limited!

Registration mail to: Stephan.Pircher@unibz.it
Let’s explore together how art and technology can open up new paths to creativity.
We look forward to seeing you at Bitz!

 

—————————————————–

 

Learn more about ComfyUI:

Official website: comfy.org

ComfyUI Beginner tutorial: https://openart.ai/workflows/academy

Showreel AI 2024: https://youtu.be/Y5Isghixqq4?si=Of2W8eAjlbypuR1e

 

Here is a list of some possible ComfyUI workflows:

  1. Creative Workflows
  • Text-to-Image (TTI): Generate an image from a text prompt
  • Image-to-Image (ITI): Use an image as a template for variations
  • Inpainting: Selectively replace specific areas of an image with new content
  • Outpainting: Expand an image beyond its original borders
  • ControlNet Variations: Achieve precise control through additional image information
  • Pose-to-Image: Human pose determines the content of the image
  • Depth-to-Image: Depth maps influence the structure of the image
  • Scribble-to-Image: Use a sketch as the foundation for the image
  • Canny-to-Image: Use edge detection images as a control element
  • Style Transfer: Apply the style of one image to another
  • QR-Code Diffusion: Creatively and legibly embed QR codes into images
  • LoRA Mixing: Combine multiple style models
  • Model Merging: Merge two or more AI models
  • Prompt Matrix: Automatically combine and compare multiple prompts
  • Upscaling: Upscale images with AI-powered sharpening
  • AnimateDiff: Combine individual images into animations
  • Prompt Morphing: Smoothly blend between prompts
  • GIF Creation: Generate animated sequences from created images
  2. Technical & Experimental Workflows
  • GLIGEN: Place objects precisely using bounding boxes
  • T2I Adapter: Alternative to ControlNet with lower data requirements
  • Noisy Latent Composition: Combine multiple image subjects into one
  • Semantic Segmentation: Analyze images into semantic zones
  • Normal Map Generation: Create structural information for 3D applications
  • Depth Map Generation: Generate depth information for VR/3D output
  • Audio-to-Image: Use sound or music to influence image generation
  • Terminal Node (Shell, Python): Run arbitrary scripts directly
  • Prompt Comparison Automation: Automatically compare text prompts
  • Batch Generation from JSON/CSV: Data-driven image production
  3. Language, Text & Voice
  • Prompt Engineering via LLM: AI creates or improves prompts
  • Captioning with CLIP + LLM: Automatically describe images
  • TTS (e.g., Bark, ElevenLabs): Convert text to natural speech
  • STT (e.g., Whisper): Convert speech to text
  • Voice Cloning (RVC): Imitate voices from a few samples
  • Voice-to-Image: Generate images from spoken text
  4. 3D & AR/VR Workflows
  • TTI + Depth2Mesh: Turn text descriptions into 3D models
  • Normal2Geometry: Convert normal maps into geometry
  • NeRF: Reconstruct 3D models from 2D images
  • Volumetric Rendering: Create 3D light volumes for realistic depth
  • SMPL Avatars: Human avatars with movement/poses
  • Multi-View Consistency: Generate consistent images from different viewpoints
  • 360° Rendering: Create spherical panoramas or VR images
  • ComfyUI + Unity Integration: Control interactive 3D scenes
  5. Utility & Automation
  • Scheduled Generation: Automate generation at set intervals
  • Random Style Switcher: Apply styles randomly
  • File-Watcher Nodes: Respond to file changes in the system
  • EXIF/Metadata Injection: Automatically insert image metadata
  • Image Comparison (MSE, SSIM): Measure differences between images
  • Template-based UI Mockups: Automatically populate design templates
  6. High-End / Advanced Workflows
  • Dreambooth Training / LoRA Training: Train your own models from images
  • Real-Time Video2Video: Transform live camera images with AI
  • Stable Video Diffusion (SV3D): Create moving scenes from images and prompts
  • Arduino Control via ComfyUI: Use sensor data to control AI output
  • Robot Integration: Connect generative AI with the physical world
  • Multi-User Queue Server: Multi-user management with task queues
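
As a side note for the technically curious: many of these workflows can also be driven from a script rather than the graphical editor. A running ComfyUI instance exposes a small HTTP API on its local port, and a graph exported via “Save (API Format)” can be submitted to it. The sketch below assumes a default local installation listening on 127.0.0.1:8188 and a file named workflow_api.json exported from the editor; both are assumptions for illustration only.

    # Queue a workflow on a locally running ComfyUI server (default port 8188).
    # Assumes "workflow_api.json" was exported from the editor via "Save (API Format)".
    import json
    import urllib.request

    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)          # the node graph, as a plain dict

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",  # ComfyUI's queue endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as response:
        result = json.load(response)     # contains the prompt_id of the queued job

    print("Queued prompt:", result.get("prompt_id"))

Finished images then land in ComfyUI’s regular output folder, just as if the graph had been queued from the editor.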

TERRAVASO – WORKSHOP

Date: 10 June
Time: 2 – 4 pm or 4:30 – 6:30 pm

Discover the potential of a soil-based material!
Join us on the Terravaso journey and learn how to create your own
soil-based plant pot.

Registration via email
bitzfablab@unibz.it

Costs
Free for students
20€ for externals

Exhibition at Fablab – unibz x we are menders – 28 May


You are all invited to the inauguration.
Bitz unibz FabLab – Via Antonio Rosmini, 9, Bolzano
Wednesday, May 28, 2025
From 17:00 to 19:00

The exhibition “Zusammen – Upcycling Together!” grew out of the collaboration between WE ARE MENDERS ESF3_H1_0082 and the students of the DesignArt degree course at the Libera Università di Bolzano.

The project ESF3_h1_0082 We are menders is carried out with co-financing from the European Union and the ESF+ Programme 2021-2027 of the Autonomous Province of Bolzano.