My thesis explores text-to-image software in Filipino architecture, aiming for realistic representation of Filipino spaces. Text-to-image software has gained popularity in architecture for producing imaginative designs, but the images it generates can carry biases, particularly when prompts reference non-western contexts, reinforcing negative stereotypes; the impact of these biases on AI-generated architectural imagery remains under-researched. While architects such as Cedric Price and Nicholas Negroponte explored early forms of architectural AI through cybernetics, recent text-to-image software has received little critical examination of its training datasets, which poses problems. Some architects, such as Shelby Doyle, have challenged biases in AI by revealing sexist stereotypes in generated imagery. To examine these biases, I held workshops with Filipinos from the Philippines and Canada to discuss images generated with the Stable Diffusion and Midjourney software. The process involved crafting prompts related to community spaces in both English and Tagalog. The findings show that the AI-generated images reflect negative cultural biases stemming from a predominantly western dataset, which particularly affects how the Filipino community is represented. Gathering community input and critiquing these images can help expand the dataset, reducing negative biases and better representing underrepresented communities.
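
As an illustration of the prompting step described above, the sketch below shows how paired English and Tagalog prompts about a community space might be sent to Stable Diffusion using the Hugging Face diffusers library. The model identifier, prompt wording, and output file names are assumptions made for illustration only, not the exact setup used in the workshops; Midjourney, by contrast, is accessed through its own interface rather than a public API of this kind.

    # A minimal sketch (not the workshop's exact setup) of generating paired
    # English/Tagalog images with Stable Diffusion via the diffusers library.
    # The model ID, prompts, and file names below are illustrative assumptions.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly available Stable Diffusion checkpoint (assumed model ID).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # Hypothetical paired prompts about a community space, in English and Tagalog.
    prompts = {
        "english": "a community gathering space in a Filipino neighbourhood",
        "tagalog": "isang lugar ng pagtitipon ng komunidad sa isang barangay",
    }

    # Generate one image per prompt so the two language versions can be
    # compared side by side in a workshop discussion.
    for language, prompt in prompts.items():
        image = pipe(prompt).images[0]
        image.save(f"community_space_{language}.png")

Comparing the outputs of the same request phrased in each language is one simple way to surface how the underlying dataset treats Filipino contexts differently from western ones.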