Zero to Hero LLMs with M3 Max BEAST

103,597 views

Alex Ziskind

1 day ago

M3 Max is a machine learning BEAST, so I took it for a spin with some LLMs running locally.
I also show how to create GGUF quantizations with llama.cpp.
Temperature/fan on your Mac: www.tunabellysoftware.com/tgp... (affiliate link)
Run Windows on a Mac: prf.hn/click/camref:1100libNI (affiliate)
Use COUPON: ZISKIND10
🛒 Gear Links 🛒
* 🍏💥 New MacBook Air M1 Deal: amzn.to/3S59ID8
* 💻🔄 Renewed MacBook Air M1 Deal: amzn.to/45K1Gmk
* 🎧⚡ Great 40Gbps T4 enclosure: amzn.to/3JNwBGW
* 🛠️🚀 My nvme ssd: amzn.to/3YLEySo
* 📦🎮 My gear: www.amazon.com/shop/alexziskind
🎥 Related Videos 🎥
* 🌗 RAM torture test on Mac - • TRUTH about RAM vs SSD...
* 🛠️ Set up Conda on Mac - • python environment set...
* 👨‍💻 15" MacBook Air | developer's dream - • 15" MacBook Air | deve...
* 🤖 INSANE Machine Learning on Neural Engine - • INSANE Machine Learnin...
* 💻 M2 MacBook Air and temps - • Why SILVER is FASTER
* 💰 This is what spending more on a MacBook Pro gets you - • Spend MORE on a MacBoo...
* 🛠️ Developer productivity Playlist - • Developer Productivity
🔗 AI for Coding Playlist: 📚 - • AI
Timestamps
00:00 Intro
00:40 Build from scratch - manual
09:44 Bonus script - automated
11:21 LM Studio - one handed
Repo
github.com/ggerganov/llama.cpp/
Commands
# assuming you already have a conda environment set up and dev tools installed (see videos above for instructions)
Part 1 - manual
brew install git-lfs
git lfs install
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt
make
git clone https://huggingface.co/teknium/OpenHe... openhermes-7b-v2.5
mv openhermes-7b-v2.5 models/
python3 convert.py ./models/openhermes-7b-v2.5 --outfile ./models/openhermes-7b-v2.5/ggml-model-f16.gguf --outtype f16
./quantize ./models/openhermes-7b-v2.5/ggml-model-f16.gguf ./models/openhermes-7b-v2.5/ggml-model-q8_0.gguf q8_0
./quantize ./models/openhermes-7b-v2.5/ggml-model-f16.gguf ./models/openhermes-7b-v2.5/ggml-model-q4_k.gguf q4_k
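The two quantize steps above shrink the f16 model roughly in proportion to bits per weight. A back-of-the-envelope sketch (the ~8.5 and ~4.5 bits/weight figures for q8_0 and q4_k are approximations, since GGUF quant blocks also store per-block scales):

```python
def approx_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file size estimate: parameters times bits per weight."""
    # 1e9 params * (bits / 8) bytes per param / 1e9 bytes per GB
    return n_params_billion * bits_per_weight / 8

# Rough estimates for a 7B model (effective bits/weight are assumptions):
for name, bpw in [("f16", 16.0), ("q8_0", 8.5), ("q4_k", 4.5)]:
    print(f"{name}: ~{approx_size_gb(7, bpw):.1f} GB")
```

This is why q4_k fits comfortably in unified memory where the f16 file is a tight squeeze.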
./batched-bench ./models/openhermes-7b-v2.5/ggml-model-f16.gguf 4096 0 99 0 2048 128,512 1,2,3,4
./server -m models/openhermes-7b-v2.5/ggml-model-q4_k.gguf --port 8888 --host 0.0.0.0 --ctx-size 10240 --parallel 4 -ngl 99 -n 512
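Once the server is up, it answers HTTP requests; a minimal sketch of hitting llama.cpp's /completion endpoint from Python (port 8888 matches the command above; the helper names and the prompt are just examples):

```python
import json
import urllib.request

def build_payload(prompt: str, n_predict: int = 128) -> dict:
    # Fields accepted by llama.cpp's /completion endpoint
    return {"prompt": prompt, "n_predict": n_predict}

def query(prompt: str, host: str = "http://localhost:8888") -> str:
    req = urllib.request.Request(
        f"{host}/completion",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

# usage, with the server running:
#   print(query("Explain GGUF quantization in one sentence."))
```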
Part 2 - auto
bash -c "$(curl -s https://ggml.ai/server-llm.sh)"
💻 MacBooks in this video
M2 Max 16" MacBook Pro 64GB/2TB
- - - - - - - - -
❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
Click here to subscribe: / @azisk
- - - - - - - - -
Join this channel to get access to perks:
/ @azisk
#m3max #macbook #macbookpro
- - - - - - - - -
📱 ALEX ON X: / digitalix

COMMENTS: 308
@AZisk 1 month ago
JOIN: youtube.com/@azisk/join
@MaxTechOfficial 5 months ago
Keep up the good hustle, Alex! -Vadim
@AZisk 5 months ago
Thanks Vadim!
@univera1111 5 months ago
@AZisk If I may ask, can you replicate this on Linux or Windows and see which is easier for users? Or you can just say here.
@zt9233 5 months ago
@univera1111 Also benchmarks.
@abhishekjha9041 5 months ago
@AZisk Sir, please make a video on MacBook Pro specifications for machine learning. I'm so confused about what to buy: a 16-inch with 30-core GPU and 96GB RAM, a 16-inch with 40-core GPU and 64GB RAM, or an M3 Pro with 18-core CPU and 36GB RAM. Like me, other people are confused too, so please make a separate video on that. It's a request.
@abhishekjha9041 5 months ago
@AZisk And I have a question: I did some research and found out that Delaware has zero sales tax, which means if I buy a $2,500 MacBook Pro there I don't have to pay any tax on it. Is that true, sir?
@giovannimazzocco499 5 months ago
Excellent stuff. I searched YouTube for weeks to find benchmarks of DNN models on the M3. This is the first and only one I've found so far. There is a ton of videos on video editing, graphics, gaming, and music production on M3s, but for fresh material about machine learning on Apple Silicon, I'm pretty convinced you're the only game in town. Keep it up. Looking forward to seeing more benchmarks.
@Kevin-hx3ci 4 months ago
Alex, I am so happy I found your videos on YouTube because I had been looking for someone to help tutor me on tech stuff on Mac. Can't express how helpful this has been for me.
@atldeadhead 5 months ago
I enjoy all your videos but this one was particularly interesting. I look forward to future videos that explore machine learning leveraging the power of the M3 Max. Fantastic stuff, Alex. Thank you!
@anthonyzheng7274 5 months ago
You are awesome! This is great. I bought an M3 Max several days ago and I'm really having a great time playing around with LLMs.
@catarinamoreira4805 5 months ago
This is fantastic! Thank you so much! More content on LLMs, please!
@suburbanflyer 5 months ago
Thanks for this Alex! Just got an M3 Max so it'll be great to try out some new things on it, this definitely looks interesting!
@SebastianWerner82 4 months ago
Great to see you creating videos with this type of content as well.
@JohnSmith762A11B 4 months ago
Excellent. Many thanks for putting this together! 🥂
@facepalmmute3619 5 months ago
The bass in your voice on the MBP speakers is phenomenal.
@mr.w7803 5 months ago
Dang!! Dude, this video sold me on that M3 Max configuration… this is EXACTLY what I want to do on my machine
@tonbii 2 months ago
I bought an M1 Max with 64GB 3 years ago to do this kind of work. I am so happy to find this video.
@ismatsamadov 5 months ago
I subscribed a few months ago, but I have never seen such quality content. Thanks, Alex! Keep going.
@AZisk 5 months ago
thx 🙏
@juangarcia-wp2zr 5 months ago
Very cool content, thanks. I feel very curious now to try out some of these LLMs.
@bawbee27 3 months ago
Incredibly helpful — this is the video everyone with an Apple Silicon machine trying to do LLMs should see!
@bdarla 5 months ago
Super helpful! I hope you will continue with further relevant videos!
@nikolamar 5 months ago
Alex this is AWESOME!!! Thank you!
@_mansoor 3 months ago
Awesome, Thank you. Halo Alex!!!🎉🎉
@RadAlzyoud 5 months ago
Brilliant. Thanks for sharing.
@joshgarzaBI 2 months ago
Awesome video here. I'm bummed I didn't do it sooner. I have never seen my M1 (16GB) freeze before. Great teaching here!
@estebanguillen8110 5 months ago
Great video, looking forward to the LLM fine-tuning video.
@jorgeluengo9774 11 days ago
Thank You Alex, this is an amazing video. I will look into the software development tools installation.
@scosee2u 5 months ago
I really love your videos and how you explain these cutting-edge concepts! Would you consider researching or interviewing someone to make a video about quantization options and how they impact using LLMs for coding? Thanks again for all you do!
@AZisk 5 months ago
Possibly!
@ChitrakGupta 5 months ago
That was really good. I learned something, and it was fun to run on the new M3 Max.
@LukeBarousse 5 months ago
Interesting, I didn't know about LM Studio; that makes things A LOT cleaner
@JasonHorsnell 5 months ago
Just got myself an M3 Max and found your videos. You’ve saved me SO MUCH TIME….. Very much appreciated…..
@danieljohnmorris 1 month ago
How much ram?
@JasonHorsnell 1 month ago
36GB Max base. More than enough for my purposes atm.
@TimHulse 21 days ago
Same here!
@XNaos 5 months ago
Finally, I waited for this
@MikeBtraveling 5 months ago
Very interested in the topic and would love to see you do more in this space.
@jameshancock 5 months ago
Nice! Thanks! FYI, when you change the preset you're changing how it feeds input into the LLM, which caused it to go nuts.
@camsand6109 5 months ago
Glad I subscribed. You've been on a roll lately (new subscriber).
@SimoneFolador 5 months ago
Thanks for the video, man! I loved it and it helped me a lot, since I wanted to try some models on my machine. What's your experience with the fans on the M3 Max? I've read that they are pretty noisy and it gets pretty hot as well. I still have an Intel machine (last generation) with 64GB RAM and a 2TB drive, but I wanted to buy a new M3 Max.
@devdeal4146 2 months ago
Just got the m3 max with 48gb ram. Excited to see how it works with your tutorial. Thanks!
@geog8964 4 months ago
Thanks, Alex.
@user-kj4ik3qm9d 5 months ago
Thank you so much for making this video; it was really helpful. Please do more of this kind of coding video and testing on the M3 MacBook, and push it to its limits. I think yours is the best channel for this because you have the knowledge and intention to do these things, and it will be a win-win situation for both of us.
@sujithkumar8261 5 months ago
Are you using the MacBook M3 base variant?
@Mrloganphillips1 1 month ago
I had so much fun with this project. I just got an M3 Max and wanted a project to work on. After I got llama.cpp running, I made a bash script to run the command and trigger a second bash script that opens a browser window to the IP address after a 5s delay, to let the server get up and running first. Then I made a Shortcuts button to run it. Now I have an on-demand LLM with an easy-to-use on/off button.
@eldee8704 3 months ago
Awesome tutorial! I bought the 14" MacBook Pro M3 Max base model to try this out... lol
@TimHulse 21 days ago
That's great, thanks!
@pbdivyesh 5 months ago
You're a good lad, thank you!🎉😅
@theoldknowledge6778 5 months ago
This LM Studio is Lit 🔥
@amermoosa 5 months ago
Amazing. Just shrinking the whole second year of engineering college into 17 minutes. Incredible 😊
@juliana.2120 5 months ago
Ohh, I love that you use conda here because it really helps me keep my hard drive clean with all those different AIs :D I'm an absolute beginner, so I'm afraid of installing stuff I can't find later on. Some people say it's "outdated" and runs into errors too often, but I can't really judge that. Is that true?
@justisabelll 5 months ago
Great video, really looking forward to the next few ML-related ones. You might have had better results with LM Studio, though, if you disabled mlock after enabling Metal GPU. Also, the model output looks nicer if you enable markdown in the settings.
@yinoussaadagolodjo4549 5 months ago
How do you disable mlock? Can't find it!
@theperfguy 5 months ago
I have to commend you for your effort. I haven't seen any other reviewer showing any use case beyond media consumption, synthetic benchmarks, and video encoding and editing. You are perhaps the only YouTuber I know who tries out other things like code compile times and ML workloads, which is what is going to run on the majority of these high-end machines.
@AZisk 5 months ago
Glad it was helpful!
@DivineZeal 2 months ago
Great video! Thinking about getting the MBP M3 for LLMs.
@stephenthumb2912 5 months ago
Thanks for testing. It's interesting that even with enough memory, there's still some slowness on the bigger model quants. My base M2 8GB can barely run the q4 7Bs... I prefer Ollama from the CLI, which runs at usable tps. It's sort of OK with LM Studio, but generally I need to run 3Bs or below with q4 quants. Orca Mini 3B is sort of my default test standard on 8GB Macs, including the Air. Can confirm: checking the Mac Metal checkbox causes runaways. Funnily, textgen runs fine with Mac Metal support as well.
@gargarism 5 months ago
I think the very first thing I will try on my already-ordered M3 Max will be to follow what you did. The whole reason I bought the M3 Max is to work with machine learning. So thanks a lot!
@AZisk 5 months ago
Good choice!
@zt9233 5 months ago
@AZisk Is the M3 Max as good as Nvidia for this?
@pec8377 5 months ago
@zt9233 No it's not. Unless you want to run large models that won't fit into Nvidia's cards, they will always beat the M3 GPU. Maybe not when the ANE is activated, but none of the tools presented here supports Core ML.
@MikeBtraveling 5 months ago
@zt9233 If you are looking for a laptop to work with LLMs locally, you can't really beat the Mac for models larger than 7B.
@joshbarron7406 5 months ago
I would love to see a tokens/second benchmark between the M2 Max and M3 Max. Trying to decide if I should upgrade.
@abhinav9058 4 months ago
Hey did you upgrade?
@SergeyZarin 5 months ago
Thanks, great explainer video!
@AZisk 5 months ago
Glad it was helpful!
@jigyansunanda 5 months ago
looking forward to your training models video
@aimademerich 1 month ago
Thank you for the GPU setting in LM Studio at 15:00!! Can you do more videos on proper GPU setup for LLMs on M1-M3?
@kingmargie1182 5 months ago
Great job!
@stanchan 5 months ago
The performance of the M3 is amazing. Waiting for the refreshed Studio, as the M3 Ultra will be a beast. Hoping it will have the 256GB RAM as predicted.
@saitaro 5 months ago
Thanks for the video, Alex! How does M3 Max compare to M2 Max for ML?
@kman41000 5 months ago
Awesome video man!
@AZisk 5 months ago
Glad you enjoyed it
@tomdonaldson8140 5 months ago
Love it! Looking forward to the training video(s). Now I want a Mac Studio M3 Ultra! Oh, no such thing yet? Come on Apple! We’re waiting!!!
@mercadolibreventas 5 months ago
Keep it up! Good job! Can you do a video on getting LLaMA Factory set up on the M3? Thanks!
@keithdow8327 5 months ago
Thanks!
@AZisk 5 months ago
🤩 thanks!
@MikeBtraveling 5 months ago
I bought a maxed-out M3 Max to do this. Please run the larger models with Ollama. When using LM Studio you need to make sure you are using the correct prompt template for the model; I think that was your issue.
@salahidin 5 months ago
Yesss he did it!!!
@BenWann 9 days ago
I couldn't agree more. I wanted to really sink my teeth into ML since it's been a while, and I bought an MBP M3 Max after seeing your comparisons. Sorry I couldn't use an affiliate code; Micro Center had a killer deal on it :(. I look for your videos to drop now, and look forward to what you come up with next.
@uninoma 3 months ago
cool thank you !!!!🤟
@JunYamog 5 months ago
Thanks for this content; more of a dev tilt, which is useful for me. I am contemplating getting an MBP after giving my old dev MBP to my niece. Based on your video, it seems best to buy as much RAM as I can afford; roughly, 30B models would need 32GB of shared RAM. Possibly better than a PC, which is limited by VRAM? I also wonder how practical it is compared to a cheaper MBP plus renting cloud GPUs for the occasional big model. I have budget constraints.
@Xilefx7 5 months ago
Can you test the LLM performance in low power mode? I believe Apple needs to optimize how they handle the thermals of the MacBook Pro with the M3 Max.
@user-th8rb5gz3p 5 months ago
Alex, thanks.
@AZisk 5 months ago
You bet!
@astrohgamingZero 24 days ago
Looks good. I use text-generation-webui and the chat/chat-instruct modes or input presets can make or break some models.
@timelesscoding 3 months ago
Interesting stuff, I wish I could understand a little more. Thanks
@davidpsp89 5 months ago
Super interesting and useful. I'll take this opportunity to ask about MATLAB again and its real performance, since what Apple shows on its page is not realistic.
@AliHussain-jh3iq 1 month ago
Insightful video. Planning to get a MacBook Pro M3 Max for LLM work. Should I go for 1TB or 2TB, a 14- or 16-core CPU, and 64GB or 128GB RAM? Thanks for your insight!
@justingarcia500 5 months ago
Hey, could you do a low battery mode test on the M3 Max as you did with your M1 Max a while back?
@chillymanny714 5 months ago
This is a great video. I think if you were to make videos teaching intro/intermediate data analysts how to build LLMs, or a series of videos trying different application creation using Macs' M chips, it would be a big hit. I will try to replicate your approach.
@syedanas2083 5 months ago
I look forward to that
@marabgol 4 months ago
Thanks Alex! Great videos; I've watched 2 so far. Do you have videos, or plan to make videos, on how to fine-tune Llama 2 models on Metal?
@AZisk 4 months ago
Not yet! But I'm considering digging more into this area on the channel.
@innocent7048 5 months ago
Very interesting article. I will try this :-)
@AZisk 5 months ago
🤩 thanks so much!
@jakubjan44 4 months ago
good stuff!
@abhinav23045 5 months ago
That fan noise is like feeling the power of AGI.
@AZisk 5 months ago
😆
@user-wg3rr9jh9h 23 days ago
Best LLM build video on YouTube ❤. I'm buying my 36GB MacBook Pro M3 Max with 14-core CPU and 30-core GPU. Planning on launching a YouTube AI/ML channel soon 🧐.
@juliana.2120 5 months ago
Have you used LocalAI yet, and would you recommend it if so? As far as I understand, it uses the same API format as GPT, so it works with a lot of already-existing GPT tools.
@Stewz66 5 months ago
If you had the M3 Max, 128GB/4TB, and you wanted to do data analysis and visualization in Python, which LLM would you use?
@radnaut 5 months ago
So very awesome 😎
@matthieuhenocque7824 5 months ago
Hey Alex, thank you very much for this very instructive video. I've been trying a bunch of local models to help me with my JavaScript development but couldn't find any that meet my needs. Do you have any recommendations? I'm looking for something able to convert ES5 code to ES6, and something able to replace jQuery with vanilla JavaScript. To be honest, I have absolutely no clue how difficult my request would be for an LLM to process, so don't be too harsh on me ^-^ Anyway, thanks a lot for your videos!
@AZisk 5 months ago
Honestly, I haven't found a local model yet that works even nearly as well as ChatGPT, but I haven't done extensive testing with the larger models like 70B+.
@christopherr8441 5 months ago
If only we could directly access and use the Apple Neural Engine for doing things like this. Imagine the speed and performance gains.
@user-ob7fd8hv4t 5 months ago
Is the 96GB version of the M2 Max suitable, do you think? I want to deploy my own 13B model locally (training the model with some relatively sensitive data), or even make it my 'digital clone'. Do you think the 38-core 96GB M2 Max is a suitable choice?
@paulmiller591 2 months ago
Great video. Any chance you could revisit LM Studio now? Does it support the M3 better? I am considering swapping out my old Intel MacBook Pro, and I do generative AI development work.
@ergun_kocak 5 months ago
3 to 5 times faster than a fully specced M1 Max 64GB. Thank you very much for the video 👍
@TheMetalMag 5 months ago
It says MBP 2 in your terminal on your MBP 3? That's a big job again. You're well into that dev stuff. Well done!
@AZisk 5 months ago
Yep. I have to properly name my machines :)
@francoispro3799 5 months ago
Thanks for the video, Alex. As for me, my laptop at work is the famously loud MBP 16 Intel i9. My personal machine is a 14" M3 Max 64GB. I have the two laptops right now, and the 16" Intel is louder than my 14" M3 Max, in my opinion. Maybe it's a 16" thing...
@AZisk 5 months ago
When not under stress, the Intel will keep being loud and the Apple Silicon will be silent. But when the fans hit over 3500 rpm, the M3 Max is louder than any of the others I've heard.
@brandall101 5 months ago
The main thing with the Intel machines is the GPU. Any moderate load will push it into chaos. With the Max you have to really push it hard; either high-performance gaming or inference will do it.
@JS-ih4qi 5 months ago
@AZisk I read that the 14" can throttle from the heat due to the smaller fans. Would this affect how fast an LLM responds after it's set up on the computer? I'm looking at the biggest M3 Max chip with 64GB RAM and 4TB. I appreciate any advice.
@hamiltonwmr189 4 months ago
If you are going to do any intensive task on a MacBook, keep it charged at 80% using AlDente. Don't run the models on battery; churning through cycles will damage its health. Keep it on the power adapter with charging limited to 80%. I did some intensive training on my M1 Pro and it went from 100% to 96% battery health in 1 year.
@CitAllHearItAll 2 months ago
4% loss in 1 year is normal. I'm at 2+ years on an M1 Pro with 86% battery health. You're either trippin' or trollin'.
@aalhaimi 1 month ago
Alex, thanks so much for this. Quick question: when running the batched-bench command, I noticed that no benchmarks are printed under the corresponding table; the table comes out empty. Everything else seems good. Any idea why?
@ericadar 5 months ago
I don't have a local machine with the right specs. What do you recommend for running LM Studio on a cloud instance?
@anastassogoldschmied 5 months ago
Can you run LLMs like Llama or Mistral on the Apple Silicon NPU? I think it was possible with Stable Diffusion, but that is a completely different thing.
@randomdude6205 5 months ago
Awesome video! Any comments on how it compares with a modern Nvidia GPU? Also, what about training times for small-ish models?
@tacorevenge87 4 months ago
Training models locally isn't efficient, even on a PC with the latest Nvidia card. Why not do it in the cloud?
@TarunKumar-yf8sj 5 months ago
Can you please tell how well these LLM manipulations would run on a 16-inch M3 Pro MacBook Pro?
@AmpiroMax 5 months ago
Please, can you compare token generation speed of any Llama-like model on the M3, M3 Pro, and M3 Max?
@redgenAI 3 months ago
@AZisk Should the M3 Max with 128GB be able to run a 70B model? And what do you think is the largest model it could fine-tune with QLoRA?
@DavidCampero26 4 months ago
Hi Alex! I would love to see a comparison between the M3 Max 14/30 and M3 Max 16/40 with the same LLM workloads. I read that many people are going with the base M3 Max, and I would like to see how much difference there is. If you know of someone who did it, please let me know!! I want to buy a laptop as soon as possible!! Thanks!!
@Renzsu 1 month ago
Hey Alex, can you please do a video on Stable Diffusion on Mac? I'm on the fence about getting one. The shared memory is tempting; it allows for more than a discrete GPU does. But I wonder how the speed is...
@donaldzou1911 5 months ago
Hi Alex, thank you so much for providing this! I tried Ollama on my new M3 Max; whenever it is generating a result, I can hear a coil hissing sound coming from where the CPU is. Wondering if you have this issue too?
@toddturner6 5 months ago
It's called a cooling fan.
@hksjacky1 5 months ago
Hi Alex, what is the app that monitors your CPU temperature?
@muhammadyounis7090 8 days ago
Hi Alex, thanks for the great content. I'm planning to buy a new MacBook M3 Max for AI work, and I'm hesitating between the M3 Max 14/30 with 96GB RAM and the 16/40 with 64GB RAM. They both have a similar price tag, so I'm not sure whether to go for the extra 10 GPU cores or the extra 32GB of RAM. Note that I'm not a pro user (yet), but I want to be able to run new models locally, train or fine-tune my own models with ease, and keep the machine for the next 5 years. What should I choose? Thank you!
@01_abhijeet49 18 days ago
These models run soooo well on my RTX 3060 desktop. Alas, my investment is worth it.