🖤💛❤️

Site in development

Use this site!


Hey!

👋

I'm Devon Crebbin

Creative Technologist

Ourland (land.org.au)

Indigenous Digital Map & Non Profit

Ourland is an Indigenous Corporation & Non Profit Charity (status in-progress) that provides an educational resource for people to learn about Australian Indigenous land, languages & place names.

Its main goal is to prevent the extinction of Indigenous cultural data & languages.

I’m currently the only director & person on the project, but I’ll be ramping up efforts significantly given its potential positive impact.

AIOR

AI Orchestrated Research

I wrote a blog post about using AIOR to research 1201 Indigenous dialects in 36 minutes: Check it out!

AIOR is a research tool for building datasets: you specify an expected data model alongside user parameters.

AIOR then uses that information to perform autonomous research tasks, retrieving the data you request and outputting it as both a table and a JSON data structure.

The results can then be downloaded for use in a spreadsheet tool or consumed programmatically as JSON.
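To make the “expected data model” idea concrete, here’s a rough sketch of what a request might look like (the field names and shapes below are illustrative, not AIOR’s actual API):

```typescript
// Hypothetical sketch of how a dataset request to AIOR might be shaped.
// The type and field names here are illustrative, not AIOR's actual API.

interface FieldSpec {
  name: string;        // column name in the output table
  type: "string" | "number";
  description: string; // tells the research run what to find
}

interface ResearchRequest {
  topic: string;                      // what the autonomous research run is about
  parameters: Record<string, string>; // user-supplied constraints
  expectedModel: FieldSpec[];         // the data model the output must follow
}

// Example request: a small table of Indigenous dialects.
const request: ResearchRequest = {
  topic: "Australian Indigenous dialects",
  parameters: { region: "Victoria" },
  expectedModel: [
    { name: "dialect", type: "string", description: "Name of the dialect" },
    { name: "languageFamily", type: "string", description: "Parent language family" },
    { name: "approxSpeakers", type: "number", description: "Estimated number of speakers" },
  ],
};

// The run returns rows matching the expected model, e.g.:
// [{ "dialect": "...", "languageFamily": "...", "approxSpeakers": 0 }, ...]
```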

I created this tool in order to more efficiently retrieve datasets to be used with Ourland (land.org.au) to help prevent the extinction of Indigenous languages.

Meta Glasses GPT Vision API

Hacky API for the Meta Glasses to use GPT Vision

This project uses the latest Meta Ray-Ban smart glasses and the GPT Vision API to automatically log your food via a voice command.

I achieved this by exploiting the Meta Glasses’ ability to send photos to your friends on Messenger via a voice command, e.g. “Hey Meta, send a photo to John blahblah”.

All I needed to do was create a fake Facebook account (sorry, fb TOS) with a name that sounded similar to “My Food log”, then attach a listener to that account that triggers any time I send a photo to it.

From there I sent the photo to OpenAI and stored the response in a fake MyFitnessPal web app I made for this demo (since MFP has no public API).
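The flow is small enough to sketch. The snippet below is a rough illustration of that pipeline: the onPhotoReceived hook and the food-log endpoint are hypothetical stand-ins for my Messenger listener and the fake MyFitnessPal app, while the OpenAI call uses the standard Chat Completions vision request format.

```typescript
// Rough sketch of the photo -> GPT Vision -> food log flow.
// onPhotoReceived and the /log endpoint are hypothetical stand-ins for the
// Messenger listener and the demo food-log web app described above.

async function onPhotoReceived(imageUrl: string): Promise<void> {
  // Ask GPT Vision to describe the food in the photo.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview",
      max_tokens: 300,
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Describe this meal and estimate its calories." },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
    }),
  });

  const data = await res.json();
  const description: string = data.choices[0].message.content;

  // Store the result in the demo food-log web app (hypothetical endpoint).
  await fetch("https://my-fake-food-log.example.com/log", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ loggedAt: new Date().toISOString(), description }),
  });
}
```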

This project went semi-semi viral on Hacker News and in turn got me noticed by some people from the Meta Reality Labs team who then offered me an interview.

I had never done LeetCode before, so for this particular opportunity the timing didn’t work out.

I will defeat DSA eventually though!

langpal

Mandarin & Cantonese phrasebook

langpal is a modern phrasebook for character-based languages, based on Anki. It allows a user to create custom phrase-lists and then practice both their listening & speaking skills using advanced text-to-speech generation & character-by-character pronunciation analysis.
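To give a feel for the kind of data involved, here’s a small illustrative sketch (the type and field names are hypothetical, not langpal’s actual code):

```typescript
// Illustrative sketch of the data a phrase-list app like langpal might work
// with. Type and field names are hypothetical, not langpal's actual code.

interface Phrase {
  characters: string;   // e.g. "唔該"
  romanisation: string; // Jyutping for Cantonese, pinyin for Mandarin
  translation: string;  // English gloss
}

interface PhraseList {
  name: string;
  language: "cantonese" | "mandarin";
  phrases: Phrase[];
}

// Character-by-character pronunciation analysis can then compare what the
// learner said against the expected syllables, one character at a time.
interface CharacterScore {
  character: string;
  expected: string; // expected syllable, e.g. "m4"
  heard: string;    // syllable recognised from the learner's speech
  correct: boolean;
}
```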

I made it to ease my social anxiety & fear when preparing to speak either Mandarin or Cantonese out in public.

For this project I’ve been getting help from Dr Chaak-Ming Lau, a Cantonese linguistics PhD and the creator of words.hk.

With his help I developed Jyutping for Flutter, a pub.dev package that converts Chinese characters to Jyutping. From a stylistic angle, this project was influenced by a fun graphic-design journey exploring a new take on Alegria/Corporate Memphis.
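Conceptually, the conversion is a character-by-character dictionary lookup. The real package is written in Dart for Flutter; the sketch below is a TypeScript illustration of the idea with a tiny hand-written dictionary, not the package’s API:

```typescript
// Conceptual sketch of character -> Jyutping conversion. The tiny dictionary
// below only covers the example characters; a real one maps thousands.

const JYUTPING_DICT: Record<string, string[]> = {
  "你": ["nei5"],
  "好": ["hou2", "hou3"], // some characters have multiple readings
  "唔": ["m4"],
  "該": ["goi1"],
};

function toJyutping(text: string): string[][] {
  // Map each character to its candidate Jyutping readings,
  // falling back to an empty list for unknown characters.
  return Array.from(text).map((ch) => JYUTPING_DICT[ch] ?? []);
}

console.log(toJyutping("你好")); // [["nei5"], ["hou2", "hou3"]]
```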

This project will also be the basis of a talk I’ll be doing at Google Melbourne for Monash University’s GDG Student Group, where students will be able to apply to get the Flutter source code.

The project also leverages Flutter “Code Push” by Eric Seidel (Creator of Flutter) & his team at Shorebird.

Gemini Coach

AI (pseudo) HR Tool for Googlers

This is a conversational AI that combines listening, speech & a brain, via VertexAI.

As part of a series of projects based on combining these mediums, I created this version to try out the new Angular 17 as well as to get experience developing with the Gemini & VertexAI APIs.
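To give a sense of the shape of that loop, here’s a minimal browser sketch. The actual app is Angular 17 talking to Gemini through VertexAI; this version uses the public @google/generative-ai SDK and the Web Speech API instead, so read it as an illustration rather than the app’s code.

```typescript
// Minimal sketch of the "listening, speech & a brain" loop.
// Uses the public @google/generative-ai SDK and the browser Web Speech API
// as stand-ins for the project's Angular + VertexAI setup.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI("YOUR_API_KEY"); // replace with a real key
const model = genAI.getGenerativeModel({ model: "gemini-pro" });

function listenOnce(): Promise<string> {
  // "Listening": capture one utterance from the microphone.
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  const recognition = new Recognition();
  return new Promise((resolve, reject) => {
    recognition.onresult = (e: any) => resolve(e.results[0][0].transcript);
    recognition.onerror = (e: any) => reject(e.error);
    recognition.start();
  });
}

async function coachTurn(): Promise<void> {
  const userSaid = await listenOnce();

  // "Brain": hand the transcript to Gemini for a coaching-style reply.
  const result = await model.generateContent(
    `You are a supportive workplace coach. The user said: "${userSaid}"`
  );
  const reply = result.response.text();

  // "Speech": read the reply back out loud.
  speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
}
```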

This project was also the basis of a talk I did at Google Melbourne for a GDG event.

The talk was about how to create an immersive AI experience.

From a high level, these conversational AI tools are still in their infancy and we haven’t had enough time to properly explore the optimal medium for interacting with these tools.

The talk is available here. I’m very out of practice with public speaking, but it was a fun event to be at.

There’s still a lot more to add to this project, like: Retrieval-Augmented Generation (RAG) via LangChain, Gemini 1.5 for better conversations, and a long-term vector storage solution to remember each individual.

Fun fact: I named it Gemini Coach a week before the Bard rebrand!