AI in code.org
gZany Lvl 2
- Windows
If he sends a link to the whole forum it may crash his server. He told me in a private chat.
Varrience Lvl 25
- Edited
gZany is it using a backend as I assume, or is it using CDO's inbuilt AI feature? I've been meaning to look into that but don't know how to structure the training data to be effective for CDO's models, and I don't have any big enough CSV data to throw into it for actual results
Captain_Jack_Sparrow
- Windows
Varrience does Code.org actually have an inbuilt AI feature?
Letti42 Lvl 6
- Windows
Captain_Jack_Sparrow it does for applab
DragonFireGames Lvl 11
- Edited
Varrience It uses a backend and the image model is https://huggingface.co/spaces/stabilityai/stable-diffusion
the text model is https://huggingface.co/openai-community/gpt2
MonsterYT_DaGamer Lvl 8
- Windows
H O W
DragonFireGames Lvl 11
MonsterYT_DaGamer I send a request to my server, my server sends a request to Hugging Face, Hugging Face returns the data, and my server then returns the data as an image which can be parsed by the CDO project.
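Roughly, the server side of that relay could look like the sketch below. This is not the actual backend code; the Express setup, the Hugging Face Inference API endpoint and payload, and the HF_TOKEN variable are all assumptions (the thread links a HF Space, whose API differs).

```js
// Minimal sketch of a prompt -> image relay (Node 18+ for global fetch).
const express = require("express");
const app = express();

app.get("/image", async (req, res) => {
  const prompt = req.query.prompt || "a cat";

  // Forward the prompt to Hugging Face and wait for the generated image bytes.
  const hf = await fetch(
    "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-2",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: prompt }),
    }
  );

  // Relay the raw image back so the CDO project can parse/display it.
  const bytes = Buffer.from(await hf.arrayBuffer());
  res.set("Content-Type", "image/png");
  res.send(bytes);
});

app.listen(3000);
```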
Varrience Lvl 25
DragonFireGames you may be able to generate results on your server end; from my results they've been pretty good, at least while using Node.js
DragonFireGames Lvl 11
Varrience It was mostly a test of my new fetch() implementation in FKEY. Pay me 100Ⓒ (the price set by Gabriel) and I can show you
Awards
- Ⓒ 100 from MonsterYT_DaGamer
Comment: give it NOW
Varrience Lvl 25
- Edited
DragonFireGames considering I built the fetch library lmao (well, a baseline version of it anyways), it seems you've vastly improved it for modular usage.
Also, you may be able to generate results on your server end; from my results they've been pretty good, at least while using Node.js, so that's most likely what I will do later on rather than key chaining.
Oh, also an update on the CDO AI model stuff: it's very weird and can only do predictions with limited data, so generating results will most likely be very tedious, probably needing model chaining to get anywhere near the performance of other stuff.
DragonFireGames Lvl 11
Varrience It's a bit different than your fetch library.
Awards
- Ⓒ 1 from Varrience
Comment: I'm aware, updated my post to show after looking
DragonFireGames Lvl 11
- Edited
The main difference is that I can send and receive data easily using the new version of fetch. I also use Owokoyo's method of parsing the image, which is like 100x faster. The fetched content doesn't have to be text or JSON either; you can also just fetch an image directly.
https://cdo-backend.onrender.com/fetch?url=<insert url here>&data=<json for 2nd parameter of fetch>
You can also add &proxy=1 if you want to just see the unencoded data in your browser.
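As a rough example, a call through that endpoint from a regular browser or Node context could be built like this (the target URL, the `data` payload, and how the response is decoded are just illustrative assumptions):

```js
// Sketch of calling the relay endpoint described above.
const target = "https://huggingface.co/openai-community/gpt2"; // whatever you want proxied
const init = { method: "GET" }; // becomes the 2nd parameter of fetch on the server

const url =
  "https://cdo-backend.onrender.com/fetch" +
  "?url=" + encodeURIComponent(target) +
  "&data=" + encodeURIComponent(JSON.stringify(init)) +
  "&proxy=1"; // optional: see the unencoded data directly in a browser

fetch(url)
  .then((res) => res.text())
  .then((text) => console.log(text));
```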
Varrience Lvl 25
DragonFireGames ah yea the biggest jump for it is probably using fromCharCode and passing an array through it, seems like it'd be much faster. Though I don't think there's really a need to fetch an image directly, because then it just takes longer since it needs to pass through your proxy, unless you're storing cached results, which can be beneficial... though I may update my library to include the method sooner or later
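For anyone curious, the fromCharCode-on-an-array trick mentioned here looks roughly like this generic sketch (not the actual library code; the chunk size and function name are made up):

```js
// Convert an array of byte values to a string in chunks,
// instead of concatenating one character at a time.
function bytesToString(bytes) {
  const CHUNK = 0x8000; // stay under apply()'s argument-count limits
  let out = "";
  for (let i = 0; i < bytes.length; i += CHUNK) {
    out += String.fromCharCode.apply(null, bytes.slice(i, i + CHUNK));
  }
  return out;
}

// e.g. bytesToString([72, 105]) === "Hi"
```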
DragonFireGames Lvl 11
- Edited
Varrience fetching images is useful for the ai image gen and probably nothing else
Awards
- Ⓒ 100 from TNitro7669
Comment: Hand it over. NOW
LGM_Productions Lvl 2
- iPhone
I would pay, but I’m too poor.
Awards
- Ⓒ 1 from DragonFireGames
Comment: lol
DragonFireGames Lvl 11
- Edited
@TNitro7669 check your private discussions
DragonFireGames Lvl 11
- Edited
A dolphin in front of a nebula:
A cat:
(give the images time to load)
Awards
- Ⓒ 4.2 from Binary_Coder
Comment: So long and thanks for all the fish
[WUT] Adam Lvl 13
- Windows
@DragonFireGames give please