s_mcleod, 8 months ago
M.2 NVMe -> PCIe x16 + some dodgy cabling = Tesla P100 eGPU 😂
Combined with my RTX3090 I can load Q4/Q5 70b models 100% into VRAM with exllama or AutoGPTQ
#LLM #AI #ML #Llama #Nvidia #GPT
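For anyone wondering how a 70B model squeezes onto two cards, here's a rough back-of-envelope sketch (the function name is made up for illustration, and it assumes the 16 GB P100 variant and pure 4-bit weights; real usage adds KV cache, activation, and quantization-group overhead):

```python
# Back-of-envelope VRAM estimate for a 4-bit-quantized 70B model.
# This ignores KV cache and runtime overhead, so treat it as a floor.
def quantized_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a quantized model."""
    return n_params * bits_per_weight / 8 / 1024**3

weights = quantized_weight_gb(70e9, 4.0)  # ~32.6 GiB of pure 4-bit weights
total_vram = 24 + 16                      # RTX 3090 (24 GB) + Tesla P100 (16 GB, assumed)
print(f"~{weights:.1f} GiB of weights vs {total_vram} GB of combined VRAM")
```

At ~32.6 GiB of weights against ~40 GB of combined VRAM, a Q4 70B fits with some headroom for context; Q5 is tighter (~40.7 GiB), which is why context length matters at that quant level.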
s_mcleod, 8 months ago
[video attachment]
decryption, 8 months ago
[deleted by author]
s_mcleod, 8 months ago
@decryption thanks bro, I haven't had to do a wheel alignment yet though 😉
s_mcleod, 8 months ago (edited 8 months ago)
@decryption haha danno, you've seen the massive turdo' I've bolted to it 😂
I've actually got a second one here if you know anyone who's in the market for a P100 12GB, happy to print them a fan adapter if they're a good sort.