
Jinki Jeong

Anserwise

AI & ML interests

None yet

Recent Activity

reacted to SeaWolf-AI's post with 🔥 about 13 hours ago
🔥 128 Blackwell GPUs — Thank You, Hugging Face

I've been awarded 128 NVIDIA Blackwell GPUs through NIPA (Korea's National IT Industry Promotion Agency). Sharing this here first — because Hugging Face is where it all started.

I design LLM architectures from scratch. HF was my lab — dissecting Transformers internals, analyzing thousands of checkpoints, iterating on Spaces with global feedback. Our FINAL Bench reached #5 globally in HF dataset popularity, and this research is exactly what earned the GPU grant.
👉 https://huggingface.co/spaces/FINAL-Bench/Leaderboard

These 128 Blackwells will scale AETHER-Net — our Proto-AGI architecture (Emergence Engine · Meta-Cognition · SLAI · Multi-Intelligence · Synergy & Critique) — validated at 0.8B with MoE expansion to 2.1B params. Next stop: 166B.

People I must thank:
@John6666 — Guardian of this ecosystem. Never misses a forum question, interested in every project, active 24/7. I've genuinely wondered if you're a machine. Remarkable.
@bartowski — Master of quantization. The hidden infrastructure of open-source LLMs. Countless experiments possible thanks to you.
@SaylorTwift — You see what others miss. Insight that cuts to the essence. Deep respect.

My promise: AETHER-Net design docs, training recipes, checkpoints, and failure logs — all shared here openly.

🤗 Thank you, Hugging Face. Let's turn the next page together. 🚀

vidraft · VIDRAFT
#OpenScience #HuggingFace #ProtoAGI #AETHER #LLMArchitecture #Blackwell #NIPA
reacted to SeaWolf-AI's post with 🤝 about 13 hours ago
liked a Space 1 day ago
FINAL-Bench/Gemma-4-Multi

Organizations

None yet