#game dev is a serious effort. it takes 100 kg of dynamite to make a game. it takes 20,000 #large stones to make a game. you need to be sealed in a machine for 40 years to make a game. if you don't do this i don't want your #opinions
'Man the Hunter is dead!'
Supposedly, because a new study shows #women doing some #hunting across almost all hunting societies. That sure doesn't mean men DON'T hunt or that men's contribution doesn't matter.
#Hadza women (and children) will opportunistically take small animals -- they can #scavenge off lion kills! But when it comes to #large game -- that's men.
⏱️ Time of discovery 🔭 of #asteroids ☄️ which passed within 1 #lunar distance from #Earth in 📆 2023
After closest approach: 37.9%
< 24 hours before: 20.7%
Up to 7 days before: 34.5%
> one week before: 6.9%
> 7 weeks before: 0.0%
> one year before: 0.0%
#Meteorite fall in #Ontario, #Canada 📆 19 Nov 2022. #Radar signatures appear from ~15 km altitude down to 850 m. Most of this fall landed in Lake Ontario, but small masses might be found east of Grimsby and #large masses might have landed near McNab https://youtu.be/ARpFSANDopQ
Unleash the Power of Large Format Printing in Brisbane: Signs, Banners, Posters, and Beyond
Transform your marketing with the dynamic capabilities of large format printing in Brisbane. Our services cover signs, banners, posters, and more, making your message larger than life. Discover the impact of oversized visuals with us.
Mistral 7B Released Under Apache 2.0 (mistral.ai)
From their website...
AITech Interview with Ulf Zetterberg, Co-CEO at Sinequa (ai-techpark.com)
Gain exclusive insights into the synergy of AI and technology in our AITech Interview with Ulf Zetterberg, Co-CEO at Sinequa. Explore the visionary perspectives of a leader driving innovation at the intersection of AI and business.
Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length (blog.salesforceairesearch.com)
TLDR: We trained a series of 7B LLMs named XGen-7B with standard dense attention on up to 8K sequence length, for up to 1.5T tokens. We also fine-tune the models on public-domain instructional data. The main takeaways are: * On standard NLP benchmarks, XGen achieves comparable or better results...
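As a hedged illustration of trying such a checkpoint, the sketch below loads one via Hugging Face transformers; the model ID "Salesforce/xgen-7b-8k-base" is an assumption based on the naming in the post, not something the excerpt confirms.

```python
# Minimal sketch, assuming XGen-7B is published on the Hugging Face Hub.
# The ID "Salesforce/xgen-7b-8k-base" is an assumption; substitute the real one.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Salesforce/xgen-7b-8k-base"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# "Standard dense attention" means every token attends to all 8K positions,
# so attention cost grows quadratically with sequence length.
prompt = "Long-document summarization works best when"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```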
OC Long Context Lengths (And Low Resource Friendly)
Thought I'd ask and see if y'all are familiar with upcoming models or techniques that I'm not. I'm aware of the MPT 7B storywriter model and the RWKV models that support up to 8192 tokens, but that's about it as far as "long" context lengths go. I'm also wanting to run this in a VM with limited resources. The most I will be...
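For the limited-resources constraint above, one common approach (not from the post itself) is loading the weights quantized. A minimal sketch, assuming transformers, accelerate, and bitsandbytes are installed, and using the MPT-7B StoryWriter model the post mentions; the Hub ID "mosaicml/mpt-7b-storywriter" is an assumption:

```python
# Hedged sketch: 8-bit quantization roughly halves weight memory vs fp16.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mosaicml/mpt-7b-storywriter"  # assumed Hub ID
quant = BitsAndBytesConfig(load_in_8bit=True)  # 8-bit weights via bitsandbytes

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",       # let accelerate spread layers across what's available
    trust_remote_code=True,  # MPT ships custom modeling code
)
```

Rule of thumb: a 7B model needs roughly 14 GB of weight memory in fp16 but 7-8 GB in 8-bit, and the KV cache still grows linearly with context length, so very long contexts remain the bottleneck in a small VM.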
OC On Stochastic Parrots, Large Language Models, and Where We're Heading (jordanwages.com)
A reflection on neural networks, the human brain, and what we'll see from AI soon.