Proxmox 8 Cluster with Ceph Storage configuration

78,723 views

VirtualizationHowto

1 day ago

Are you looking to set up a server cluster in your home lab? Proxmox is a great option along with Ceph storage. In this video we take a deep dive into Proxmox clustering and how to configure a 3-node Proxmox server cluster with Ceph shared storage, along with Ceph OSDs, Monitors, and a Ceph storage pool. In the end I test a migration of a Windows Server 2022 virtual machine between Proxmox nodes using the shared Ceph storage. Cool stuff.
★ Subscribe to the channel: / @virtualizationhowto
★ My blog: www.virtualizationhowto.com
★ Twitter: / vspinmaster
★ LinkedIn: / brandon-lee-vht
★ Github: github.com/brandonleegit
★ Facebook: / 100092747277326
★ Discord: / discord
Introduction to running a Proxmox server cluster - 0:00
Talking about Proxmox, open-source hypervisors, etc - 0:48
Thinking about high-availability requires thinking about storage - 1:20
Overview of creating a Proxmox 8 cluster and Ceph - 2:10
Beginning the process to configure a Proxmox 8 cluster - 2:24
Looking at the create cluster operation - 3:03
Kicking off the cluster creation process - 3:25
Join information to use with the member nodes to join the cluster - 3:55
Joining the cluster on another node and entering the root password - 4:15
Joining the 3rd node to the Proxmox 8 cluster - 5:13
Refreshing the browser and checking that we can see all the Proxmox nodes - 5:40
Overview of Ceph - 6:11
Distributed file system and sharing storage between the logical storage volume - 6:30
Beginning the installation of Ceph on the Proxmox nodes - 6:52
Changing the repository to the no subscription model - 7:30
Verify the installation of Ceph - 7:51
Selecting the IP subnet available under Public network and Cluster network - 8:06
Looking at the replicas configuration - 8:35
Installation is successful and looking at the checklist to install Ceph on other nodes - 8:50
The Ceph Object Storage Daemon (OSD) - 9:27
Creating the OSD and designating the disk in our Proxmox hosts for Ceph - 9:50
Selecting the disk for the OSD - 10:15
Creating OSD on node 2 - 10:40
Creating OSD on node 3 - 11:00
Looking at the Ceph dashboard and health status - 11:25
Creating the Ceph pool - 11:35
All Proxmox nodes display the Ceph pool - 12:00
Ceph Monitor overview - 12:22
Beginning the process to create additional monitors - 13:00
Setting up the test for live migration using Ceph storage - 13:30
Beginning a continuous ping - 14:00
The VM is on the Ceph storage pool - 14:25
Kicking off the migration - 14:35
Only the memory map is copied between the two Proxmox hosts - 14:45
Distributed shared storage is working between the nodes - 15:08
Nested configuration in my lab but still works great - 15:35
Concluding thoughts on Proxmox clustering in Proxmox 8 and Ceph for shared storage - 15:49
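For reference, the GUI walkthrough in the chapters above can also be done from the command line on a Proxmox VE 8 node. A rough sketch (the cluster name, IP addresses, subnet, disk device `/dev/sdb`, and pool name are all placeholders for your own environment; option names can be checked with `man pvecm` and `man pveceph`):

```shell
# --- On the first node: create the cluster ---
pvecm create homelab-cluster

# --- On each additional node: join using the first node's IP ---
pvecm add 192.168.1.10

# Verify quorum and membership
pvecm status

# --- On every node: install and initialize Ceph ---
pveceph install
pveceph init --network 192.168.1.0/24   # public network for Ceph

# Create a monitor (repeat on each node for redundancy)
pveceph mon create

# Create an OSD on the empty disk designated for Ceph
pveceph osd create /dev/sdb

# Create the replicated pool (size 3 = three copies, one per node)
pveceph pool create ceph-pool --size 3 --min_size 2
```

These commands require root on an actual Proxmox node, so treat them as a sketch of the steps shown in the video rather than a copy-paste recipe.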
Proxmox 8: New Features and Home Lab Upgrade Instructions:
www.virtualizationhowto.com/2...
Proxmox 8 and Ceph:
www.virtualizationhowto.com/2...
Top VMware vSphere Configurations in 2023:
www.virtualizationhowto.com/2...

COMMENTS: 107
@davidefuzzati8249 · 9 months ago
That's how a tutorial should be done! Thoroughly explained and step-by-step detailed!!! THANK YOU SO VERY MUCH!!
@IamDmitriev · 8 months ago
It may sound strange: on the one hand this is a step-by-step instruction, but on the other it really helped me understand the logic of Proxmox and Ceph.
@naami2004 · 10 months ago
The best Proxmox & Ceph tutorial, thank you.
@VirtualizationHowto · 10 months ago
@naami2004, awesome! Thank you for the comment and glad it was helpful.
@substandard649 · 5 months ago
This was a perfect tutorial, watched it once, built a test lab, everything worked as expected.
@VirtualizationHowto · 4 months ago
Awesome @substandard649, glad it was helpful! Be sure to sign up on the forums and I can give more personalized help here: www.virtualizationhowto.com/community
@pg6525 · 19 days ago
Best detailed HOW to video in Proxmox universe...
@jburnash · 6 months ago
Thank you! This was incredibly helpful with my setting up Ceph for the first time and showed all the details necessary to better understand it and test that it was working!
@souhaiebbkl2344 · 8 months ago
You have got a new subscriber. Awesome tutorial.
@sking379 · 9 months ago
We definitely love the content, we appreciate your attention to detail!!!
@michaelcooper5490 · 8 months ago
Great Video sir, I appreciate the work you put in. It is well explained. Thank you.
@JasonsLabVideos · 10 months ago
Good video sir, i played with this with a few Lenovo Mini machines and loved it !!
@cberthe067 · 8 months ago
Great tutorial ! I'm planning to buy some old thinclient (ryzen A10) to test this proxmox 8 ceph config !
@dronenb · 9 months ago
Wow, this is an excellent tutorial. Thanks!
@felipemarriagabenavides · 9 months ago
Advice: in production environments use 10Gbps links on all servers; otherwise a bottleneck is created when the disks run at 6Gbps.
@cafeaffe3526 · 1 month ago
Thanks for this awesome tutorial. It was easy to understand, even for a non-native English speaker.
@junialter · 6 months ago
that was a perfect introduction. Thank you.
@samegoi · 10 months ago
Ceph is an incredibly nice distributed object storage solution, and it is open source. I need to check it out myself.
@achmadsdjunaedi9310 · 9 months ago
The best tutorial for clustering, 😊 thank you, sir... We will try it on three server devices, to be applied to the Republic of Indonesia radio data center...
@IvanPavlov007 · 25 days ago
Did it work?
@user-xn3bt5mz1x · 7 months ago
Great video and thoroughly detailed. My only advice for properly monitoring a "migrating VM" would be to send a ping to the 'Migrating VM' from a different machine/VM. When doing anything from the VM being migrated, the process will pause in order to be transferred over to the new host (thus not showing any dropped packets "from" the VM's point of view). Keep up the good work!
@VirtualizationHowto · 7 months ago
Thank you @user-xn3bt5mz1x good point. Thanks for your comment.
@youNOOBsickle · 10 months ago
I’ve been planning to move from VMware & VSAN to “Pmox” :) & ceph for a while now. I just need the time to set everything up and test. I love that you virtualized this first! My used storage is about 90% testing vm’s like these. 🤷‍♂️
@nicklasring3098 · 1 month ago
That's really cool! Thanks for the vid
@AlejandroRodriguez-wt2mk · 7 months ago
Nicely done, subscribed now.
@dimitristsoutsouras2712 · 6 months ago
At 3:26 it would be useful to mention that Ceph and HA benefit greatly from a separate network for their data-exchange traffic. That would be the point to choose a different network from the management one (assuming, of course, there is one to choose from). Yes, it will work with the same network for everything, but it won't be as performant as with a dedicated one. Edit: stand corrected by 8:26, where you do mention it.
@EViL3666 · 2 months ago
Thank you, that was very informative and spot-on... One thing I did pick up, and this is my weirdness: you might be trying a little too hard with the explicit descriptions. For example, in the migration testing you explicitly call out the full hostnames several times; at that stage in the video, viewers are intimately familiar with the servers, so stating "server 1 to 2" would feel more natural.
@IvanPavlov007 · 25 days ago
Could go both ways - as a newbie I appreciate the explicit details as it’s exactly when presenters start saying generic-sounding “first box” or “the storage pool” is where I often get lost!
@marcusrodriguesadv · 10 months ago
Great content, shoutoutz from Brazil...
@ierosgr · 9 months ago
Nice presentation and explanation of some key core steps of the procedure. Yet you omit to mention that:
- Nodes should be the same from a hardware perspective, especially when the VMs running are Windows Servers, since you could easily lose your license just by transferring it to a different node with different hardware specs.
- Even if someone might get it from just pausing the video and noticing that the three storages are the same on all three nodes, a mention of that wouldn't hurt.
- Finally, a video like this could be a nice start for several others about maintaining and troubleshooting a cluster with Ceph: usual situations like a node going down for good, or down for a long time while parts are ordered (which floods the user with syslog messages you might want to show how to suppress until the node is fixed), etc.
@Meowbay · 9 months ago
This is bullcrap. I have proxmox running on 3 tiny PC's (TRIGKEY, Intel, and an older mini-PC board), all 3 of them were once licensed for Windows 7, 10 and 11, I've transferred all their activations to my Microsoft's cloud account, which is essentially done just by/when activating and having logged in using a MS account. I then installed proxmox and erased the 3 machines. They even have different sized boot-SSD's, proxmox and ceph don't give a rat's ass. I can easily run/create a Win11 VM and transfer it without issues between the 3. Microsoft has all 3 hardware images in its database, so it's all fine with the OS moving from one to the other.
@ierosgr · 9 months ago
@@Meowbay Nice, but give it a try with Windows Server licenses, not plain desktop OSes. You mentioned you tried Windows 7/10/11; I stated Windows Server OSes. I was on the phone with Microsoft for over an hour and they couldn't even give me a straight answer on whether the license would be maintained after migration. Finally, I was talking about production environments, where knowing what will happen is mandatory, not a home lab.
@sesvetski · 10 days ago
Great video. Thank you!
@substandard649 · 3 months ago
This is great. I would love to see a real-world home-lab version using 3 mini PCs and a 2.5GbE switch. I think there are a lot of users like me running Home Assistant in a Proxmox VM along with a bunch of containers for CCTV / DNS etc. There are no videos covering this Ceph scenario and I need a hero 😊
@VirtualizationHowto · 3 months ago
@substandard649 sign up and join the VHT forums here and we can discuss any questions you have in more detail: www.virtualizationhowto.com/community
@parl-88 · 10 months ago
Wonderful Video! Thanks for your time and detailed explanations. I just found your YT channel and I am loving it so far.
@VirtualizationHowto · 10 months ago
Awesome @pedroandresiveralopez9148! So glad to have you and thank you for your comment. Also, join up on the Discord server, I am trying to grow lots of good discussions here: discord.gg/Zb46NV6mB3
@MAD20248 · 6 months ago
Thank you so much, I think I now clearly understand how the storage requirements work. But what about CPU/RAM sharing? I'm planning to build a cluster with enough storage and run VMs on each node, fully utilizing the hardware on each of them. I don't know how the cluster will behave when one of the nodes fails, or whether I should spare some RAM/CPU.
@MrRoma70 · 9 months ago
Nice work, Ceph is really good, although when I moved a VM from a different disk to the pool it did not migrate seamlessly; nevertheless I like the idea. Can you make a video showing how to use Ceph with HA? Thank you.
@arthurd6495 · 3 months ago
Thanks. good stuff.
@djstraussp · 5 months ago
Nice video, I'm planning on upgrading to a Proxmox Ceph cluster this holiday. A prompt result from the YT algorithm. BTW, that nested cluster under vSphere... 😮
@VirtualizationHowto · 4 months ago
@djstraussp Thank you for the comment! Awesome to hear.....also sign up for the forums, would like to see how this project goes: www.virtualizationhowto.com/community
@rahilarious · 10 months ago
Please make a Ceph cluster tutorial on a non-Proxmox distribution.
@cesarphilippakis350 · 7 months ago
Which is better, VMware or Proxmox? I have 3 nodes with 4 SSDs each, and all three have 10GB NICs. But for a high-performance high-availability environment, which is the better option, especially when it comes to VM performance with Windows? In your experience, is Proxmox with Ceph better, or VMware with vSAN?
@resonanceofambition · 6 months ago
bro this is so cool
@milocheri · 1 month ago
Hi, it is the best tutorial I have seen so far on YouTube; it is complete. However, I have a question: since you said you are running each Proxmox node in VirtualBox, how did you manage to create a VM and not get the error message "KVM virtualisation configured, but not available"? Thank you for your help!
@PaulKling · 5 months ago
FYI: when you click in a CMD window, it changes the title to "Select" and the process running in the window pauses. For most of the demo it was in selection mode (pausing the ping command); it would be interesting to see how it worked without the selection. Otherwise, loved the demo, and the Ceph storage setup was exactly what I was looking for.
@VirtualizationHowto · 4 months ago
@PaulKling awesome! Thank you for the comment! Be sure to sign up on the forums: www.virtualizationhowto.com/community
@dwieztro6748 · 8 months ago
What happens if pmox1 (the node where the cluster was created) crashes and can't come up again? And what if I reinstall pmox1?
@jamhulk · 10 months ago
Awesome! How about proxmox plus SAN storage?
@Renull55 · 9 months ago
I don't have any storage available in the OSD step; how do I create it?
@2Blucas · 28 days ago
Hi, thanks for your great content, simple and well explained. Regarding Proxmox VE's High Availability features: if I have a critical Microsoft SQL Server VM, will the system effectively handle a scenario where one PVE node crashes, or where there's a need to migrate the VM to another PVE? Specifically, I'm concerned about the risk of losing transactions during such events. How does Proxmox ensure data integrity and continuity for database applications like SQL Server in high-availability setups?
@khamidatou · 2 months ago
Great tutorial, thanks for sharing.
@VirtualizationHowto · 2 months ago
Thanks for watching!
@---tr9qg · 9 months ago
Can't wait for the Proxmox dev team to add fault-tolerance functionality to their product. It would be cool.
@AlienXSoftware · 2 months ago
Great video. It is a shame you had the command prompt window in "selected" mode when you did the demo of live migration, as this would have paused the pings; but neat nonetheless.
@subhajyotidas1609 · 1 month ago
Thanks for the very clear and concise tutorial. I had one question though: as the pool is shared by the three nodes, will it be possible to make the VM auto-migrate to another host if one host goes down abruptly?
@VirtualizationHowto · 1 month ago
@subhajyotidas1609 Thank you for the comment! Yes, the Ceph storage pool acts like any other shared storage once configured. You just need to setup HA for your VMs and if a host goes down, the heartbeat timer will note the host is down and another Proxmox host will assume ownership of the VM and it will be restarted on the other host. Hit me up on the forums if you have any other questions or need more detailed explanations. Thanks @subhajyotidas1609 ! www.virtualizationhowto.com/community
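The HA behavior described in this reply can be sketched with Proxmox's `ha-manager` CLI. A minimal sketch, assuming the VM's disks already live on the shared Ceph pool and that VM ID 100 is a placeholder:

```shell
# Put the VM under HA management; if its current host fails,
# the HA stack restarts it on a surviving node
ha-manager add vm:100

# Optionally constrain or prioritize placement with an HA group
# ha-manager groupadd prod-nodes --nodes pve1,pve2,pve3

# Check the current HA state of managed resources
ha-manager status
```

These commands need to run as root on a Proxmox VE node that is part of a quorate cluster.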
@souhaiebbkl2344 · 8 months ago
One question: do we need to have the same shared storage space across all nodes for Ceph to work properly?
@IvanPavlov007 · 25 days ago
I have the same question - can I make one physical server have much larger storage (eg via external HBA/SAS 12*3.5” enclosure) than others, to use as extra file storage?
@Ogk10 · 18 days ago
Would this work across multiple locations? One environment at home and one at my parents' place, for HA and a uniform setup / ease of use?
@visheshgupta9100 · 8 months ago
I am planning on deploying multiple Dell R730XD in homelab environment. Was looking for a storage solution / NAS. Would you recommend using TrueNAS or CEPH? Can we create SMB / iSCSI shares on a CEPH cluster? How to add users / permissions?
@visheshgupta9100 · 8 months ago
Also, in the present video, you've added just 1 disk per node. How can we scale / expand our storage? Is it as simple as plugging in new drives and adding it to the OSD? Do we need to add the same amount of drives in each node?
@youssefelankoud6497 · 4 days ago
Don't forget to give us your feedback if you used Ceph, and how it worked!
@valleyboy3613 · 2 months ago
great video. do the ceph disks on each node need to be the same size?? I have 2 Dell servers and was going to run a mini micro PC as the 3rd node with 2TB in each of the Dells but 1TB in the Dell mini PC. would that work?
@VirtualizationHowto · 2 months ago
@valleyboy3613 thank you for the comment. See the forum thread here: forum.proxmox.com/threads/adding-different-size-osd-running-out-of-disk-space-what-to-look-out-for.100701/ as it helps to understand some of the considerations. Also, create a new Forum post on the VHT Forums if you need more detailed help: www.virtualizationhowto.com/community
@markstanchin1692 · 3 months ago
Hello, great video, I was able to follow along. Question: what's the difference between a cluster like this in Proxmox and a Kubernetes (e.g. K3s) setup in Proxmox, and the trade-offs and benefits of one versus the other? Also, could you list some examples of possible use scenarios and configurations? Thanks.
@VirtualizationHowto · 3 months ago
@markstanchin1692 thank you for the comment! Sign up on the VHT forums here and let's discuss it in more detail: www.virtualizationhowto.com/community
@pg6525 · 19 days ago
One question: if I add a disk to the Ceph pool, is it formatted (wiped) or is the data kept? Thank you.
@fbifido2 · 10 months ago
2. Why didn't you show the total storage of the pool? Can we add more storage later? How do we set that up?
@frandrumming · 3 months ago
Its cool 😎
@igorpavelmerku7599 · 1 month ago
Interesting ... adding the second node gets into the cluster, but stays red (like unavailable); when trying to add the third node I get a "An error occurred on the cluster node: cluster not ready - no quorum?" error and the cluster join aborts. I have reinstalled from scratch all three nodes a couple of times, I have removed cluster and redone over and over again to no avail. Not working my side ...
@gbengadaramola8581 · 4 months ago
Thank you!! An insightful video. Can I configure a cluster and Ceph storage across 3 datacenters without a dedicated network link, only over the internet?
@VirtualizationHowto · 4 months ago
@gbengadaramola8581 Thank you for the comment, please sign up on the VHT forums and we can discuss it further: www.virtualizationhowto.com/community
@tariq4846 · 3 months ago
I have the same question
@kjakobsen · 8 months ago
That's funny. I have always heard you couldn't do live migrations on a nested hypervisor setup.
@bioduntube · 4 months ago
will this process work for Virtual Environment 6.2-4?
@KevinZyu-iz7tn · 9 months ago
nice tutorial. thanks. Is it possible to attach an external Ceph pool to Proxmox cluster?
@troley1284 · 7 months ago
Yes, you can mount external RBD or CephFS to Proxmox.
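For the external-RBD case mentioned in this reply, a hedged example of what the entry in `/etc/pve/storage.cfg` might look like (the storage name, monitor addresses, pool, and user are all placeholders for your own cluster):

```
rbd: external-ceph
        monhost 10.0.0.11 10.0.0.12 10.0.0.13
        pool rbd
        username admin
        content images,rootdir
        krbd 0
```

The matching client keyring would go in `/etc/pve/priv/ceph/external-ceph.keyring` (named after the storage ID).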
@bioduntube · 3 months ago
thanks for the video. I am trying to set up Clustering and Ceph on nodes that have previously been configured. I have succeeded with Clustering. However, Ceph was installed but when I try to set up OSD, I get the error "Ceph is not compatible with disks backed by a hardware RAID controller". My ask is what can I do to remedy this?
@VirtualizationHowto · 3 months ago
@bioduntube thank you for the comment! Hit me up on the forums with this topic and let's discuss it further www.virtualizationhowto.com/community
@bash-shell · 10 months ago
Your videos are a great help. P.S. I think light mode would make details easier to see in tutorials.
@acomav · 10 months ago
Totally agree. Dark mode may be the personal preference of the majority of people for day-to-day work on their own screens, but for YouTube videos you should use light mode. Love your content.
@KingLouieX · 3 months ago
My storage added to node 1 works fine but when I try to add the OSD to the other nodes it states no disks available.. Can the other 2 nodes share the USB drive connected to Node 1?? Or does the other 2 nodes need their own unused storage in order for Ceph to work? thanks.
@VirtualizationHowto · 3 months ago
@KingLouieX thank you for the comment! Sign up on the forums and create a new topic under "Proxmox help" and let's discuss this further: www.virtualizationhowto.com/community
@fbifido2 · 10 months ago
1. What about hosts with more than one HDD/SSD? What should they do in the OSD part?
@frzen · 9 months ago
One OSD per spinning disk; and one NVMe/SSD can be used as the WAL for multiple OSDs, I think.
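The layout this comment describes (one OSD per spinning disk, a shared NVMe device for the WAL/DB) can be expressed with `pveceph`; a sketch, with device names as placeholders:

```shell
# One OSD per HDD, each pointing its RocksDB/WAL at the same NVMe device
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
pveceph osd create /dev/sdc --db_dev /dev/nvme0n1
```

These commands require root on a Proxmox VE node with the named disks empty and unpartitioned.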
@fbifido2 · 10 months ago
3. Can we upgrade the size of the Ceph disk, e.g. from 50GB to 1TB, if the 50GB is about to get full? 3a. How does one know the free space on each host if the HDD is in a Ceph pool?
@VirtualizationHowto · 10 months ago
@fbifido2, thanks for the comments and questions. Hop over to the Discord server and we can have more detailed discussions there: discord.gg/Zb46NV6mB3
@pivot3india · 10 months ago
What happens if one of the servers in the cluster fails? Does the virtual machine keep running on another server (fault tolerance), or is there a failover?
@samstringo4724 · 9 months ago
There is failover if you set up High Availability (HA) in the Proxmox UI.
@chenxuewen · 10 months ago
good
@SataPataKiouta · 5 months ago
Is it a hard requirement to have 3 nodes in order to form a functional PVE cluster?
@VirtualizationHowto · 4 months ago
Thank you for the comment! Sign up on the forums and I can give more personalized help here: www.virtualizationhowto.com/community
@davidgrishko1893 · 8 months ago
4:21 - I don't think that's an encrypted stream. That just looks like base64 encoded information.
@43n12y · 7 months ago
thats what it is
@hagner75 · 7 months ago
Love your video. However, I'm a bit disappointed in you. You made your nested Proxmox on a VMware ESXi setup. That should've been Proxmox :P Good job nonetheless.
@cheebadigga4092 · 10 months ago
So Ceph is "just" HA? Meaning, all nodes in the cluster basically see the same filesystem?
@MikeDeVincentis · 10 months ago
Sort of but not really. Ceph is distributed storage across the cluster using dedicated drives for OSD's with a minimum of 3 nodes. You have to have a cluster before you build the storage, and you have to have drives installed in the nodes to build the ceph cluster. Data is distributed across the nodes so they are readily available if a node or drive / osd fails. You then have the option of turning on HA for the vm's so they can always be available on top of the data.
@cheebadigga4092 · 10 months ago
@@MikeDeVincentis Thanks for the explanation. However I still don't really understand. Does "distributed" mean, that each node has an exact replica of a given data set? Like a mirror? Or is it more like a RAID 0?
@MikeDeVincentis · 10 months ago
@@cheebadigga4092 more like raid 10. 3 copies of the data blocks spread across the nodes. Think raid but spread across multiple devices, not just drives inside one system.
@cheebadigga4092 · 10 months ago
@@MikeDeVincentis ahhh thanks!
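To make the "3 copies of the data" point in this thread concrete: with the default replicated pool size of 3, usable capacity is roughly one third of raw capacity. A back-of-the-envelope sketch (real pools also reserve headroom, so treat this as an approximation):

```shell
# Three nodes, one 1000 GB OSD each => 3000 GB raw capacity
raw_gb=3000
replicas=3

# Each block is stored 3 times, so usable space is raw/replicas
usable_gb=$((raw_gb / replicas))
echo "${usable_gb} GB usable"   # prints: 1000 GB usable
```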
@JohnWillemse · 9 months ago
Please have a look at Wazuh, the open-source security platform with Security Information and Event Management (SIEM). Regards, John 🤗
@niravraychura · 10 months ago
Very good tutorial, but I have a question: what kind of bandwidth should you have to use Ceph? I mean, is gigabit enough, or should one use 10GbE?
@VirtualizationHowto · 10 months ago
@niravraychura, thank you for the comment! Hop over to my Discord server to discuss this further either in the home lab discussion section or home-lab-pics channel: discord.gg/Zb46NV6mB3
@nyanates · 8 months ago
If you're going to get serious about it you should have a 10G link and a dedicated Ceph network. Get a HW setup with 2x nics in it so one of them can be dedicated to the Ceph network.
@niravraychura · 8 months ago
@@nyanates thank you for the answer 😇
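The dedicated-Ceph-network advice in this thread maps to the network options when Ceph is initialized on Proxmox. A sketch (subnets are placeholders for your own environment):

```shell
# Public network carries client I/O; the cluster network carries
# replication and heartbeat traffic between OSDs
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```

This requires root on a Proxmox VE node with interfaces on both subnets; if only one NIC is available, everything runs on the public network, as shown in the video.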