What is x86-64-v3? Understanding the x86-64 microarchitecture levels

58,409 views

Gary Explains

2 months ago

Those who follow Linux-related news will have noticed the term x86-64-v3 being used recently. What is x86-64-v3? What is all the fuss about? What is its relationship to Linux? Is it important? Let's find out!
---
Let Me Explain T-shirt: teespring.com/gary-explains-l...
Twitter: / garyexplains
Instagram: / garyexplains
Some stock footage from: www.vecteezy.com/free-videos/...
#garyexplains

COMMENTS: 209
@questionlp • 2 months ago
Two nits: i686 was introduced in the Pentium Pro; and you do find a lot of Intel Atom processors in embedded products like NAS devices, network firewalls, and control planes for switches. While some of those are moving to Xeon D or newer Atom processors with Gracemont or newer Atom cores, requiring v3 will be a tougher sell for those markets than for the consumer market.
@GaryExplains • 2 months ago
Indeed, but the makers of NAS devices, network firewalls, etc, generally make their own Linux distros and aren't reliant on RHEL or SUSE or whoever.
@GaryExplains • 2 months ago
Also, just to nit back, I didn't say that the 686 was introduced with the Pentium II, I said that other microarchitectures followed the 386, including the i686 which is in the Pentium II. It was also used in the P3 and various Celerons (like the Covington ones etc).
@olokelo • 2 months ago
Yeah, not only Atoms but also newer Pentiums and Celerons (like the J4125) don't support AVX while being perfectly good CPUs. I believe these low-power lineups only started being v3 compatible around Tiger Lake, which is just 3 years old now.
@geoffstrickler • 2 months ago
Right, v2 or even v1 is more than sufficient for most embedded uses. Many of them don't even get significant benefits from x64; they can run 32-bit and still do everything needed.
@wile123456 • 2 months ago
Atom cores are so garbage and slow, and not even that efficient. They are like an ARM or MIPS core from 20 years ago lol
@modolief • 2 months ago
4:11 - Main content starts here
@johnpaulbacon8320 • 2 months ago
Thanks for this well done and informative video.
@bazoo513 • 2 months ago
9:00 - Exactly - one of Linux's claims to fame has always been support for ancient hardware, giving a second or third life to what would otherwise be hard-to-recycle waste.
@noergelstein • 2 months ago
At my company we have committed internally to supporting our appliances for 10 years after the last unit is sold, which includes a lot of devices with 32-bit (Arm) Linux. That will last until at least 2041, and new 32-bit devices are still coming to the market.
@autarchprinceps • 2 months ago
There is another reason requiring v3 will get harder: performance-wise, most office Linux users won't actually need to update their CPUs beyond a pre-v3 point, as the demands on hardware haven't increased much outside of gaming in recent years. I can certainly name a few people around me for whom this is true. For example, my mother's PC is still running perfectly fine with my old first-gen quad-core Core i7 860 and 8 GB of RAM. As long as you have an SSD as the boot volume, even Windows isn't really an issue performance-wise. Yes, gaming is a whole different story, but as long as you don't do that, anything a desktop will do for you will still run great. Servers will update, I'm sure, and occasionally failed hardware will be replaced with new, but nothing like the rate you had in prior generations, when real requirement increases demanded hardware updates. And if they are upgrading, at least for some, x86 won't be the target architecture. That's already true for some Chromebooks and all new Macs, as well as Qualcomm-based Windows on ARM devices, which will get a lot better when the new Snapdragon X Elite hits shelves. Also any Neoverse-based servers, and many SBCs and NAS systems that might run Linux on ARM instead as well.
@DavidAlsh • 2 months ago
Would be cool to see a gaming comparison between a default kernel and the same distro using a kernel compiled to target v4.
@ckingpro • 2 months ago
@DavidAlsh You wouldn't see much difference. The game has to target AVX-512. That said, many games do support AVX2.
@OgbondSandvol • 2 months ago
Perfectly fine with a first-gen i7? 😊 That's a huge processor! We're in 2024 and I run Win10 and Ubuntu perfectly fine on a 2006 Core 2 Duo, with 8GB and an SSD...
@autarchprinceps • 2 months ago
@ckingpro Yes, and applications can target that optionally. They don't need to limit support to that to get the benefit on supported processors. That being said, vector instructions are in a weird limbo anyway. If the application really benefits from the difference between AVX2 and AVX-512, then why isn't it already using CUDA/OpenCL, the GPU's hardware video transcoders, or the GPU in some other way through libraries, whether for rendering, AI, or anything else parallelisable? And if that isn't worth the effort, because it is too small a task, then guess what: using a slightly older vector extension will also make very little difference to the end-user experience.
@ckingpro • 2 months ago
@autarchprinceps I would argue this is not necessarily the case. Certain string-processing tasks can be done faster with AVX-512 if the CPU supports processing them in 1 cycle (with exceptions for those that take two cycles, like Zen 4 and Rocket Lake). But they also don't make sense on a GPU.
@AndrewMellor-darkphoton • 2 months ago
AVX-5 1 12 never heard of that instruction set
@GaryExplains • 2 months ago
😂
@Fetrovsky • 2 months ago
Thank you for this great video.
@GaryExplains • 2 months ago
Glad it was helpful!
@monolofiminimal • 2 months ago
I came across this a long while back when I was trying to download an update for mpv-player and there was an x86-64-v3 version, but there was a post explaining what it was, so it was all good.
@frednitney5831 • 2 months ago
Are there documents that accurately, completely and, most importantly, concisely map and define the baselines, extensions, instructions, and their CPUs? If so, which is the best? Thanks in advance!
@shanedavenport734 • 1 month ago
If I remember correctly the 80386 used a memory controller that was located in the Northbridge, which at the time was a separate chip from the CPU. Today the Northbridge is part of the CPU, so we really don't mention it much anymore.
@fixpontt • 2 months ago
_"Intel does not officially support AVX-512 family of instructions on the Alder Lake microprocessors. Intel has disabled in silicon (fused off) AVX-512 on recent steppings of Alder Lake microprocessors to prevent customers from enabling AVX-512. In older Alder Lake family CPUs with some legacy combinations of BIOS and microcode revisions, it was possible to execute AVX-512 family instructions when disabling all the efficiency cores which do not contain the silicon for AVX-512"_ does that mean v4 will never be a "standard" like v3?
@GaryExplains • 2 months ago
It is because Intel has started using E + P cores, and it hasn't implemented it on the E cores yet, so it has disabled it on the P cores.
@Momi_V • 2 months ago
It is possible Intel might support AVX-512 even in future efficiency cores. There are tricks you can pull to get those instructions to "work" by splitting them up internally and working through them in multiple stages with smaller execution units (AMD's current Zen 4 / Ryzen 7000 processors support AVX-512 with two 256-bit floating-point units). This doesn't increase the total throughput, but at least you can run AVX-512 and don't have to disable it. However, they would still need extra space for the larger and wider registers (the ultra-high-speed storage inside the CPU for your variables), because AVX-512 specifies at least 32 × 512-bit registers. Intel has also proposed AVX10, which includes all of the useful new instructions of AVX-512 but can be implemented in 128, 256 and 512-bit variants, because some of the performance improvements do not really come from working on 512 bits at a time (that's limited to specific workloads, mostly scientific, emulation and some compute). The general improvements come from things like BFloat16 (useful for AI), conflict detection, new bit manipulations and a whole bunch of other stuff that would be nice to have even with smaller registers.
@ckingpro • 2 months ago
@Momi_V It seems Intel has abandoned AVX-512 and is moving to AVX10, which will allow the features of AVX-512 on 256-bit vectors. The other solution is the way AMD did it, where they support AVX-512 and its instruction optimizations but execute in two cycles using 256-bit registers.
@Momi_V • 2 months ago
@ckingpro To be fair, "AVX-512" is a complete mess and doesn't even exist as one thing. By now it's about 20 different feature flags, some of which are only supported on specific generations or SKUs... It's time to move to something less fragmented; I just hope they don't screw everything up with the anemic 128-bit version.
@ckingpro • 2 months ago
@Momi_V That is true. Had it not been for Intel segmenting its processors, we would have had v3 by now (though it would have required bigger Atom cores).
@godnyx117 • 2 months ago
Thank you for the video Gary, very helpful!
@GaryExplains • 2 months ago
Glad you enjoyed it
@John.0z • 2 months ago
Recently I "culled" my old, unused Athlon 64 desktop. Now I see that this was a better decision than I had thought at the time! The new one is v4 😁
@TheGamer_Zero • 2 months ago
So basically, it's a marketing thing. That's why big companies design their own computers and OSes.
@DavidAlsh • 2 months ago
Interesting. If I have a new CPU that supports x86-64-v4, does that mean my system will be faster if I compile the kernel myself rather than using the one distributed by my distro? I have never compiled the kernel before, but it can't be that hard; it might be worthwhile if there is an uplift in performance!
@kazedcat • 2 months ago
Short answer: no. AVX-512 instructions are primarily for AI and media processing. So if you are encoding a lot of video and you are using an application that takes advantage of AVX-512, you might get a speed-up with that type of workload. But for daily use it has no effect.
@drpainjourney • 1 month ago
CachyOS (Arch-based) offers v3 and v4. I use v3 with my AMD Ryzen 9 5900X, and everything runs really great.
@Winnetou17 • 2 months ago
Funny to see "Gentoo is now offering x86-64-v3 packages" in the list. I assume that's for the binary packages, because normally you compile the packages yourself, and it's trivial (and by far the easiest compared to the other distributions) to compile them to use everything your CPU is capable of.
@xpusostomos • 2 months ago
I'm confused: people are moving to ARM because it's RISC (reduced instruction set)... and yet they keep adding instructions to x86.
@GaryExplains • 2 months ago
It is a little more complex than that. The new x86 instructions are generally SIMD instructions. Arm has also added new SIMD instructions. Armv9 has SVE2, for example.
@ChrisM541 • 2 months ago
Thanks for a fascinating upload. Considering all target CPU instruction sets (including the v1-v4 extensions etc.) are selected via compiler configuration, the only limiting factors are... 1) the capability of the CPU running the code, and 2) the extension-awareness/capability of the compiler. With that, the real question is why devs take so long to release versions targeting these extensions, if (pretty much) all that is required is a 'simple' compiler switch? Certainly, it's no easy task to develop a compiler that knows how to produce machine code that always selects the best optimisation when particular target flags are selected. Over the last 30-40 years, we've put more and more trust in the compiler, while losing, more and more, the best speed/size optimisation tool possible: a human assembly language expert. Want to 100% guarantee your core/critical routines are speed-optimized? Become an expert in assembly. Sadly, total compiler faith is too embedded today.
@ckingpro • 2 months ago
The reason is compatibility. For example, Ubuntu and others still target the x86-64 baseline. Certain parts of a program can be sped up using things like ifunc to dynamically select a function at runtime, or JIT in browsers/Java, but you have to do specialized work for it. Many languages still don't have easy SIMD support either.
@ChrisM541 • 2 months ago
@ckingpro 'Compatibility with the majority' will, by definition, mean the lowest common denominator. That's not really what this video, or my point, is about. Instead, we're talking about a significant number of configurations with more advanced extensions that do exist and that would benefit from compiler targeting (where, of course, the compiler is aware). Zen 4 released almost 1.5 years ago. As we know, it is v4-extension compliant. It would be extremely unreasonable for devs not to provide at least v3 (if not v4) releases for this large and diverse group of users. It would be damningly unreasonable if they deliberately held back on its potential. Unfortunately, AMD's competitor has deep pockets.
@ckingpro • 2 months ago
@ChrisM541 They don't need to provide separate releases. Instead, they do runtime checks and use a more optimized version (like using AVX). That's commonly done everywhere from multimedia like FFmpeg to GIMP and so on. That's the whole ifunc thing I was talking about: where it has significant performance advantages, you use ifunc. As for the latter, there's already HWCAPS. And distro developers need to take into consideration the costs/storage/bandwidth for this. Hence why we are seeing v2 distros targeting processors made in the last decade, and a few offering v3 as optional rather than a requirement, as most processors meet it. v4 is spotty, and AVX-512 benefits specific SIMD workloads only.
@zackyezek3760 • 2 months ago
@ckingpro As a systems developer, the reason is that adding support for newer instructions via runtime detection is NOT the same thing and has significant limitations:

1) To do it right, any code using the newer instructions must be segregated into plugin shared libraries that are dynamically loaded. This is a nontrivial change to the software, and calling the affected code is SLOWER: you now have to go through C++ virtual functions or logically equivalent indirection to invoke the logic from the rest of the system. Only those with runtime support for the newer code will actually run faster; everyone else runs even slower than before.

2) When you set the baseline globally (e.g. the AVX2 compiler flag), you are telling the OPTIMIZER that it can refactor your code to use those instructions wherever it makes sense. A good modern compiler will then substitute them into places you can't or wouldn't do (1) for, like speeding up nested for loops, or even improving assembly that originally came from code that isn't yours (e.g. a template instantiation from a 3rd-party or system library).

Overall, it's the difference between localized, hand-crafted optimization and global automated optimization. As a rule you do (2) wherever you can and (1) only when you must. I'm also not surprised these Linux distributions sometimes see slowdowns as well as vast speedups. Using the newer instructions isn't always a straightforward substitution at the assembly level, and some code really needs manual refactoring before it'll optimize correctly. The deeper reason is that the new instructions usually expect their input to be 'staged' a certain way, e.g. a buffer whose address and size are both a multiple of 16 bytes, with no holes, and the compiler may have to inject additional setup logic to enforce that. Or worse, the instruction will handle the suboptimal input directly but simply be way slower as a result.
@ckingpro • 2 months ago
@@zackyezek3760 I agree with you on the compiler optimizations. Even things like autovectorization is better if you allow newer instruction set extensions. In addition, inlining functions using newer instruction sets isn’t possible due to potential ABI incompatibilities. However, we have HWCAPS with the microarchitecture levels. I also agree that it isn’t straight-forward using newer instructions. You may also know that AVX512 on certain Intel CPUs can downclock the CPU so occasional or mixed usage can lead to lower performance. However, I disagree with your take on indirect function calls. Yes they are indirect, but even directly calling a function is indirect on Linux due to ASLR using PIE using GOT and PLT. The only thing you don’t get anymore from this is inlining. And we have HWCAPS for that
@GegoXaren • 2 months ago
Want to know about the hackiest x86-64 binary format? It's x32 (look up the x32 ABI), which is normal x86-64 but uses 32-bit pointers, for cases where speed is king but compatibility is not.
@ckingpro • 2 months ago
It is mostly accurate. x32 is not for when you need speed (x86-64 already gives you that) but rather for when you are extremely memory constrained. x86-64 uses 64-bit pointers for a 48-bit (and now 52-bit) virtual address space, so pointers take double the memory, 8 bytes instead of 4. Similarly, the Apple Watch used ARMv8 with 32-bit pointers for memory savings due to its 1 GB of memory, which is why Apple used Bitcode to prepare for the transition to 64-bit chips.
@valenrn8657 • 2 months ago
Intel Sandy Bridge introduced the 1st AVX extension.
@hawkanonymous2610 • 2 months ago
Yes, but not the one required for v3 unfortunately.
@valenrn8657 • 2 months ago
@@hawkanonymous2610 x86-64-v3 level needs AVX, AVX2, BMI2, MOVBE, XSAVE.
@valenrn8657 • 2 months ago
@@hawkanonymous2610 My comment is targeted for 4:35, _Advanced Vector Extension 2 (AVX2), also known as Haswell New Instructions, is an expansion of the AVX instruction set introduced in Intel's Haswell microarchitecture._
@GaryExplains • 2 months ago
There should probably be a comma in there somewhere; the sentence means that AVX2 was introduced in Haswell, not AVX.
@ckingpro • 2 months ago
Intel really held back v3 adoption. Also, if anyone is wondering about small cores and meeting the v2 requirement: Silvermont and up in the Atom lineup, and Jaguar and up in AMD's cat-core lineup, meet v2 requirements.

As for how other OSes do it: macOS supported SSSE3 (not to be confused with SSE3, which it also supports because it is older than SSSE3) due to when Apple started the transition to Intel. Since Sierra, it requires Penryn, so you can target up to SSE4.1 by default. In addition, since macOS binaries can support multiple architectures, Apple created the x86-64h slice, which meets x86-64-v3, well before the microarchitecture levels were a thing, so you can optimize for both the baseline and Haswell out of the box. And since Apple didn't use Pentium, Celeron or Atom parts, that slice covers 2014-ish Macs while still supporting older ones with the normal x86-64 in the same binary.

For Windows: it has required SSE2 since Windows 8 for 32-bit, and retroactively for Windows 7 since 2018. For 64-bit, Windows 8.1 required CMPXCHG16B (basically allowing atomics on 128-bit/16-byte memory), LAHF and SAHF, and PrefetchW support (which just helps the CPU prefetch relevant data; your CPU doesn't even have to implement the prefetch, as Intel treated it as a NOP (No Operation) until Broadwell). So this excluded only the earliest x64 CPUs. Coincidentally, those CPUs also support SSE3. The v2 level requires SSE4.2, POPCNT and other instructions. Windows 11 finally requires processors with instructions beyond v2.
@timothygibney159 • 1 month ago
The hate on this requirement is unreal, especially for Windows.
@ckingpro • 1 month ago
@timothygibney159 The issue was that while something like x86-64-v2 is fine, Microsoft officially required processors from 2018 or newer when Windows 11 came out in 2021. That means you could only upgrade if your computer was 3 or fewer years old. That was rather extreme.
@ZipplyZane • 2 months ago
Am I the only one who gets frustrated by the use of Intel CPU codenames? Everyone seems to do that when talking about instruction sets and such, and I always have to go look up which processor it actually is to have a point of comparison with CPUs I actually know and/or own. I would love it if people would at least at some point say that Haswell means "4th generation Core", or the 4000 series. That way I'd be able to instantly compare it with, say, my new-to-me 5300U laptop. (Got it for $25, so very much worth it.) I much prefer the generation names because you can instantly compare them without having to memorize all the codenames and their order. It's not like Android or Apple, who at least used letters in alphabetical order.
@chuckwright6395 • 1 month ago
I've been annoyed by that for a long time.
@AndrewMellor-darkphoton • 2 months ago
I was under the impression AVX-512 is being heavily reworked for big.LITTLE. So what happened to x86-64-v4?
@sundhaug92 • 2 months ago
Modern CPUs support x86-64-v4, but AFAIK this is a matter of raising the minimum requirement so you can take advantage of newer extensions.
@TheEVEInspiration • 2 months ago
Intel went lazy when they introduced efficiency cores and dropped AVX-512 support on those. Now we have processors where some cores support it and other cores on the same chip don't. Intel is clearly run by idiots; they should have at least supported it via microcode, even if it runs slower than a true hardware implementation.
@volodumurkalunyak4651 • 2 months ago
@sundhaug92 Not all modern CPUs support AVX-512 (which is included in x86-64-v4). Intel AVX-512 support: server CPUs from Skylake-SP, workstation from Skylake-X, desktop - only Rocket Lake, mobile - only Tiger Lake. Desktop and mobile Alder Lake and Raptor Lake - no support for AVX-512 😟😟😟. AMD AVX-512 support: Zen 4 and Zen 4c for all market segments. Some Zen 3 CPUs like the 5800X3D, and the very recently announced 5700X3D, have no AVX-512.
@valenrn8657 • 2 months ago
Intel has "AVX10".
Intel AVX10.1: optional 512-bit FP/int, 32 vector registers, 8 mask registers, etc. (the same as Intel AVX-512); version-based enumeration; Intel Xeon P-cores only.
Intel AVX10.2: optional 512-bit FP/int, 32 vector registers, 8 mask registers, 256/512-bit embedded rounding, etc. (the same as Intel AVX-512); version-based enumeration; supported on P-cores and E-cores.
Recompiled software is needed for AVX10. Intel AVX is such a mess.
@valenrn8657 • 2 months ago
@volodumurkalunyak4651 Zen 3 has partial AVX-512 support, e.g. the vector AES and carry-less multiply quadword instructions, for obvious benchmarking reasons.
@timewave02012 • 2 months ago
My Gentoo system is optimized for my exact microarchitecture 😉
@myne00 • 2 months ago
Why can't compilers build combo binaries which use the new instructions if they're there, and the old way if they're not? Sure, the binaries would be bigger, but space isn't really a big issue these days.
@olnnn • 2 months ago
That is kinda what these "levels" were originally meant for: to use with a mechanism in the Linux library-loading system that dynamically loads different libraries depending on capabilities. Many performance-critical libraries, like the core C and C++ libraries, math libraries, and video encoders, are also programmed so that they dynamically use functions that take advantage of different instruction sets depending on what is supported. This is one reason why changing the compile flags to use these levels for everything often doesn't make a massive difference: they are already being used in the areas that benefit most.
@ckingpro • 2 months ago
That’s what the dynamic linker now supports with these levels called HWCAPS. In addition, you can compile multiple copies of a function and use ifunc to select at runtime based on CPU capabilities. And of course, JIT can compile based on your CPU anyways
@louistournas120 • 2 months ago
You can just write the code once and use an if-else statement; an if-else doesn't take very long to execute. Another solution is to use function pointers. This is possible in C and C++, but most other languages can't do it.
@ckingpro • 2 months ago
Yeah you can definitely set up static function pointers and initialize them at main. But I still consider the ifunc approach to be nicer.
@douggale5962 • 2 months ago
The idiocy of linker developers is too extreme. printf("Hello world "); is 68KB with one architecture at a time. And in case you are inept, that optimizes down to calling `puts` in a shared object. Bunch of idiots can't handle multiple architectures.
@Sumire973 • 2 months ago
Windows 11 is actually going to raise the CPU requirement to x86-64-v2 in the next major release, according to the latest builds. The existing official Windows 11 minimum requirements are on paper and refer more to technical support, since in practice Windows 11 is still able to run on x86-64-v1 CPUs, but it is finally being recompiled to only use instructions that meet the x86-64-v2 criteria. I suspect this is just the beginning, and in a few years Windows may only support x86-64-v3, with x86-64-v4 recommended due to AI features.
@Martinit0 • 1 month ago
I think Windows 11 even today (early 2024) requires more than x86-64-v2. For example, Intel Gulftown has MMX and up to SSE4.2, but is not officially supported by Win 11.
@Autotrope • 16 days ago
Watching this on a v2. Maybe I should upgrade one of these years... I'm on Ivy Bridge too, so I'm only *just* short of Haswell.
@_MasterLink_ • 2 months ago
I'm still riding on an AMD FX-8370 with Windows 11 patched to bypass all requirements. Even without UEFI, booting legacy only, it's surprisingly useful and fast. When I first got the FX series it was crap, slow and worthless, but something happened over the past decade: it somehow got faster through software updates, and with newer games actually using multiple cores it can now play a lot of the modern game library (not all, and not all run "great"; some run "well"). As a day-to-day driver it still boots in 5 seconds, it still plays YouTube in 4K60, it still handles everything I throw at it, except software that might require a newer AVX than I have. And even then, I haven't felt the need to update it yet.

Ironically, the only v3 CPU/computer I have is my ThinkPad, but its GPU no longer gets driver updates, meaning it's essentially more useless than my v2 desktop, since the desktop can still have a modern GPU plugged in and boot with it, whereas my ThinkPad wouldn't be portable anymore with a modern GPU over Thunderbolt (and not even supported properly; it's a W541, so that's kind of a hack which can't even use the internal LCD).

If I am forced to update the desktop I'll have no choice, but I will resist as long as I can. Not out of spite, but because I don't feel I should pay for a new computer when, again, this one still boots in mere seconds, is ready to be used, and actually runs circles around my ThinkPad, which IS a v3. I don't spend money when I don't need to; it's also the reason my GPU is a 1080 Ti and not an RTX. Can't afford it, don't want to, because I have 11GB of VRAM and most games are quite comfortable with that. Alan Wake 2? Yeah, vkd3d actually helped enough to make the game playable at least, but I didn't care for the game so I never kept playing. Starfield? What a boring game "to me" (stressing this is only my opinion and not presented as fact), so I didn't care that my CPU was too slow for it. The games I do play run at or above 60fps, so what matters most is that I am happy with the machine, and thus I refuse to pay for an upgrade I feel I don't need.

I also understand that at some point it's not feasible to target this machine anymore, but as far as I'm concerned that's not my problem until it actually is, and right now my v2 is not actively refusing to run any software I have thrown at it, except the rare few that weren't important enough to care about. Linux isn't my OS of choice (it was for 2 decades, but I stopped), and Windows is not actively targeting v3 yet (except maybe the Rust-based kernel, which I await to try; I have the ISO and VirtualBox ready, which, using AMD-V, should tell me if it works without actually installing it).
@ckingpro • 2 months ago
Note that the Windows 11 requirements don't simply map to x86-64-v3. As the video mentioned, it supports Tremont, which doesn't support v3. However, the supported-processor list is above v2. In addition, if you look at the requirements at first, they only listed SSE4.1, which is just under v2. However, there have been reports that 24H2 no longer works on pre-2010 Core 2 hardware, though I'm not sure how that applies to old AMD processors. Though my laptop runs Windows 11 and meets the requirements, I am still not happy with Microsoft throwing useful machines out. With a system as old as yours, you might even be able to emulate UEFI using DUET should the time come when Microsoft requires UEFI.
@louistournas120 • 2 months ago
I use an Ath chocolate lon X2 2.8 GHz as a secondary machine. I put in 16 GB DDR3 which is plenty of RAM for weeb surfing. I am guessing that video decoding on youtube and such is done by the CPU when using Firefox on Kubuntu. The UPS reports how much watt it is using. If I use Br chocolate ave or Ch chocolate rome, the wattage is lower. It is using the GPU. So, in your case, with Wi chocolate n 11, it is probably using the GPU, so watching 4K videos is not a problem.
@Sumire973 • 2 months ago
The latest Insider builds suggest that Microsoft is already beginning to recompile Windows 11 to raise the absolute minimum CPU requirement to x86-64-v2, and I'm afraid the latter can no longer be patched: when a CPU tries to run a program compiled to require instructions it doesn't have, it will not work at all; it will just segfault or crash at startup.
@ckingpro • 2 months ago
@Sumire973 The good news is that MasterLink's FX-8370 supports POPCNT, which is needed by the Windows 11 Insider builds. It also meets x86-64-v2. I still suggest he wait until after October 2025 to upgrade (unless people find ESU bypasses again) in case Microsoft bumps it up again.
@ckingpro • 2 months ago
@louistournas120 Is there a reason you added "chocolate" in the comment? Regardless, I agree the 1080 Ti is being used to decode video, as it supports VP9 decode (which is what YouTube uses for videos bigger than 1080p, specifically 1152p or higher). Edit: Also, your Athlon X2 could support the Windows 11 builds (though still stay on 10 until October 2025) if it supports SSE4a. Regardless, it does not support x86-64-v2, so if SSE4.1 or 4.2 becomes a requirement, you will be SOL. Some distros are requiring x86-64-v2 as well, but not all. And you will always have Debian (which still supports non-PAE 32-bit CPUs).
@pugster73
@pugster73 2 місяці тому
Surprised that the Intel Atom processors didn't support v3 until last year, i.e. the Intel N100 or N95.
@arturpaivads
@arturpaivads 2 місяці тому
I may be wrong but Ryzen 7000 has v4 support. They do support AVX-512. And I think they are the only CPUs that do for now.
@volodumurkalunyak4651
@volodumurkalunyak4651 2 місяці тому
AMD Zen4 server (Genoa, Genoa-x), HEDT (Storm Peak), desktop (Raphael) and mobile (Phoenix, Hawk Point, Dragon Range) offerings are all x86-64v4 (have AVX-512 support)
@vk3fbab
@vk3fbab 2 місяці тому
I think that developers should always try to target the lowest required hardware. For example, there is no real need for libc to require AVX-512. We should keep support for things for as long as they are useful, so long as they don't stagnate forward progress. Luckily, UNIX OSes do a good job of this; one thing these OSes do well is breathe new life into older hardware.
@KuruGDI
@KuruGDI Місяць тому
I wonder how hard it would be to _reliably_ ship only the source code, check your CPU on install and then compile (or recompile) what is needed for your system. So if your CPU does not support v4, the kernel or parts of it would be compiled right on install. But I'm sure if it was that easy, it would have already been done...
@GaryExplains
@GaryExplains Місяць тому
What system would do the compiling because you only shipped the source code?
@KuruGDI
@KuruGDI Місяць тому
@@GaryExplains I'm not an expert, but I could imagine something like a very basic system that takes care of this. Windows has something similar (whose name I forget): some kind of stripped-down version that could also be used for troubleshooting and solving problems.
@GaryExplains
@GaryExplains Місяць тому
You might want to look at Gentoo and Linux From Scratch.
@KuruGDI
@KuruGDI Місяць тому
@@GaryExplains 1) I wish I was _that_ good and well educated in the how-to of Linux 😞 2) This might work, but as long as it's not at least as easy to use as an Ubuntu installation, it won't work. (For me it's not only, but also about re-using or extending the usage of old machines, which is of course much harder if your distro does not support older CPUs)
@GaryExplains
@GaryExplains Місяць тому
There will always be distros that support older machines, and any distro that is built for desktops or for non-tech users will always support the widest possible number of CPUs etc.
@surenbono6063
@surenbono6063 13 днів тому
..is there a chance of a 128-bit era, while most BIOSes are still 16/32-bit, apart from newer UEFI?
@troysright
@troysright 2 місяці тому
Great video. Can you please do a video comparing the S24 Ultra's Snapdragon 8 Gen 3 NPU vs the Pixel 8 Pro's Tensor NPU? Which is actually the better NPU?
@GaryExplains
@GaryExplains 2 місяці тому
NPUs are really hard to test correctly.
@Ronny999x
@Ronny999x 2 місяці тому
Next gen Tensor coming soon isn't it? That should be superior for sure.
@GaryExplains
@GaryExplains 2 місяці тому
@Ronny999x Everything in tech is "coming soon". A new GPU, a new CPU, a new NPU... It never ends!
@Ronny999x
@Ronny999x 2 місяці тому
@@GaryExplains Yes but Google Tensor is closely related to Samsung Exynos. And considering Exynos 2400 recently launched. The Google Tensor should be the next to Launch. Also on 4nm. I hope we get news on that soon.
@GaryExplains
@GaryExplains 2 місяці тому
@Ronny999x It will launch in October, like it does every year.
@ryshask
@ryshask 21 день тому
main subject at 7:20
@SnijtraM
@SnijtraM 2 місяці тому
"Baseline x86-64 included MMX, SSE, SSE2". I believe that the earliest AMD64 processors did not have SSE2, and that's why 64-bit Windows hasn't supported it since .. what was it, Windows 7? Or was it Windows 8, or 10? I have one old machine that must use 32-bit Windows to run at all.
@GaryExplains
@GaryExplains 2 місяці тому
I don't think so. The very first AMD64 processors had SSE2. If you have an AMD CPU that only supports 32-bit then it is likely a 32-bit processor, and not AMD64.
@TheEulerID
@TheEulerID 2 місяці тому
One minor point, is that it is trademark, not copyright that was the relevant factor for why Intel went for Pentium branding rather than using a number. You can't copyright a single word either. Copyright and trademark might sound very similar, but the former will run out in time and requires substantial creative content. However, trademark can last forever, provided that they are renewed. Trademarks also only protect things within a trading/commercial environment. Intel could sue me if I infringed their trademark by, say, selling a computer related product using the name Pentium, but they cannot sue me, or anybody else, for using the word Pentium in other contexts. The difference can be important. Disney try and protect their control over Mickey Mouse by trademarking the designs related to the character, not just by copyright. Indeed the first ever film in which Mickey Mouse appeared has just dropped out of copyright.
@GaryExplains
@GaryExplains 2 місяці тому
Thanks for the additional info.
@TheEulerID
@TheEulerID 2 місяці тому
@@GaryExplains I've been following too many lawyer channels I suspect. However, it's not the trivial difference that people might think, and the whole area of Intellectual Property Rights (IPRs) is complex, but it also has a huge effect on people in the IT industry. For example, the US Supreme Court has ruled that APIs cannot be copyrighted as they are interface definitions, not creative works. Algorithms can be patented, but only under special conditions. There are also debates about whether ISAs can be protected, and the differences between patents, copyright and trademarks all come into play. Keeping control of IPRs with ISAs gets very complicated, but the owners of those tend to add multiple layers of patents and other mechanisms for that purpose. As the major commercial ISAs are all subject to continuous development, with new features and additions, the goalposts can be kept moving, even as things run out of their patent.
@John.0z
@John.0z 2 місяці тому
@@TheEulerID Most interesting, particularly that APIs cannot be copyrighted. Thank you. The law is a weird thing.
@GoatZilla
@GoatZilla 2 місяці тому
Some of these extensions seem to be ... really specialized; not exactly things you'd be doing in the kernel itself. There are probably some features that a kernel might care about (VM extensions), but outside of detecting features and managing extra context and resource allocation (i.e. treat like a device) in case there's a limited set of new resources associated with the extensions, the rest seemingly should be kicked to userspace. It just feels a little weird to totally break a distro for, what, some extra vector SIMD instructions. We've seen this with other architectures with their alphabet of feature sets and extensions.
@GaryExplains
@GaryExplains 2 місяці тому
We are talking about more than the kernel, also all the libraries, runtimes, userland tools, etc.
@GoatZilla
@GoatZilla 2 місяці тому
@@GaryExplains I know. I'm essentially saying if this is kicked over to userspace, and we have so many tools in userspace to do stuff like fat binaries and multilib whatevers and container armor whatever that it doesn't seem to make a lot of sense to completely drop support for an architecture just because, say, it doesn't have some SIMD vector instruction that you barely use or could live without.
@GaryExplains
@GaryExplains 2 місяці тому
Yeah, but as I said this is on a per-distro basis, so clearly the distro makers see benefits and those outweigh the potential negatives (like loss of users). Also, some will likely ship multiple variants or have repos for v2 and v3 builds etc. I don't see any problems.
@GoatZilla
@GoatZilla 2 місяці тому
@@GaryExplains By that reasoning, it's fine that Windows 11 doesn't run on 7th gen hardware because you can always go out and grab a copy of Windows XP somewhere. Sure, there are different distros. But if the distro you *need* drops support for your architecture, I kind of think that might be a problem.
@GaryExplains
@GaryExplains 2 місяці тому
@@GoatZilla No, not quite, because a) you can't buy a legal copy of XP anymore, b) XP isn't supported any more. In the case of Linux you will find a free and fully supported distro that meets the needs of your hardware.
@Barnardrab
@Barnardrab 2 місяці тому
Shortly after the release of the 8-bit Nintendo Entertainment System, they already had 32 bit processors? Why didn't Nintendo contract with Intel?
@GaryExplains
@GaryExplains 2 місяці тому
Two main reasons. Price is one. It is said that the NES was going to be a 16-bit system, but they went with 8-bit because of cost. Second, these devices take a couple of years to design, so work started on it before the 386 came out. So Nintendo had no knowledge of what was coming next from Intel.
@ytguy2010
@ytguy2010 2 місяці тому
For Windows 11 and AMD CPU's, the requirement from Microsoft is not the Zen version, but rather the model number series. The requirement is AMD 3000 series or higher. The Ryzen 3600 is Zen 2, but the Ryzen 3400G is Zen+. Both will meet the requirement. AMD's naming scheme is stupider than Intel's naming scheme.
@GaryExplains
@GaryExplains 2 місяці тому
It is actually even more complicated than that since the Ryzen 3 2300X, the Ryzen 5 2600, 2500X, 2600E, 2600X, and Ryzen 7 2700 are all supported as are several Athlon processors including the 7220U. Of course, in the video I was giving a rule of thumb, it wasn't a video about which processors are supported in Windows 11.
@vasudevmenon2496
@vasudevmenon2496 2 місяці тому
Hmmm, so buying a used ThinkPad with coreboot plus SSD and RAM upgrades seems unwise now, especially if you use Fedora or other distros mandating new x86 revisions
@GaryExplains
@GaryExplains 2 місяці тому
What distro are you using that is mandating x86-64-v2 or -v3? I don't think Fedora is? Is it?
@vasudevmenon2496
@vasudevmenon2496 2 місяці тому
@@GaryExplains currently using Linux mint Debian edition 6 after jumping from fedora to Ubuntu
@GaryExplains
@GaryExplains 2 місяці тому
So that should be OK. There is even a 32-bit version. I don't think your laptop purchase was unwise.
@vasudevmenon2496
@vasudevmenon2496 2 місяці тому
@@GaryExplains no it's running x86-64 build. It's been more than a decade since i tried Debian and was amazed the packages were newer than Ubuntu.
@GaryExplains
@GaryExplains 2 місяці тому
What I meant was that since it still supports 32-bit then baseline x86-64 obviously will be supported for a long time to come yet.
@talibong9518
@talibong9518 2 місяці тому
Might as well just make a replacement for x86 at this point; it's becoming a compatibility nightmare. Like an x86-64 without all the instructions that aren't really needed anymore, revised every few years while keeping compatibility in mind.
@GaryExplains
@GaryExplains 2 місяці тому
Arm? RISC-V? Also. if you remove the instructions that aren't really needed any more but are still used occasionally then how do you keep compatibility?
@LordApophis100
@LordApophis100 2 місяці тому
Intel is working on that, they call it X86S. It will have CPUs start in 64 bit protected mode and drop all things 16bit.
@GaryExplains
@GaryExplains 2 місяці тому
@LordApophis100 Indeed, I have a video about it here: ukposts.info/have/v-deo/g4OSm41snqOB0Gg.html
@loc4725
@loc4725 2 місяці тому
*Suse:* "Suse-er" *"Can't copyright numbers":* There was a court case between Intel and AMD over AMD's licence to produce x86 processors. AMD claimed they could use _any_ instructions from any Intel processor with an x86 numerical name, so to save on court costs Intel responded by changing from numbers to alpha names ("Pentium" etc).
@GaryExplains
@GaryExplains 2 місяці тому
I think you are confusing two separate things. 1) In March 1991, a judge sided with AMD, and invalidated Intel's trademark on the 386, by claiming it was generic. 2) AMD claimed that, due to the contract between it and Intel, it had the legal right to Intel's microcode for multiple generations of x86 chips. It took until 1995 for that one to get sorted out.
@loc4725
@loc4725 2 місяці тому
​@@GaryExplains Ahh yes. It seems that in the U.S. you can, subject to some rules, trademark bare numbers. In this case Intel had allowed the potentially trademarkable '286' & '386' to become generic, thanks in part to their cross-licencing agreement with AMD, hence the change. Also, I forgot to mention, but code scheduling & branch prediction optimisations by the compiler seem to generate the greatest returns for most workloads vs. just making the compiler emit newer instructions.
@frankklemm1471
@frankklemm1471 2 місяці тому
The first 32-bit CPU by Intel was the Intel iAPX 432, released in 1981. One of the three big fails of Intel: extremely complex and extremely slow.
@MonochromeWench
@MonochromeWench 2 місяці тому
At this point v2 should cause no problems for any enterprise linux distro (Windows has required v2 level since 8.1). Desktop distros might have to contend with people intentionally using modern linux on obsolete hardware
@ckingpro
@ckingpro 2 місяці тому
Windows has not required v2 since Windows 8. Windows 8.1 upped the requirement, requiring CMPXCHG16B (basically allowing atomics on 128-bit/16-byte memory), LAHF and SAHF, and PrefetchW support (which just helps the CPU prefetch relevant data; your CPU doesn't even have to support the prefetch, as Intel treated it as a NOP (No Operation) until Broadwell). So this excluded only the earliest x64 CPUs. Coincidentally, those CPUs also lacked SSE3. The v2 level requires SSE4.2, POPCNT and other instructions. Windows 11 finally requires processors with instructions beyond the original baseline, but still below v2.
@FrankHarwald
@FrankHarwald Місяць тому
It's kind of uncommon & confusing to call these microarchitectures, because in the modern processor world the word microarchitecture already means something different. Instead it's more appropriate & common to call these either architecture extensions or subarchitectures.
@GaryExplains
@GaryExplains Місяць тому
Well that is what the compiler people (lists.llvm.org/pipermail/llvm-dev/2020-July/143289.html) and RedHat (gitlab.com/x86-psABIs) call it. I guess your argument is with them. Also, it is "microarchitecture levels" not just "microarchitectures".
@xcoder1122
@xcoder1122 2 місяці тому
Debian still offers i386 builds as of today, which is kind of stupid as those are in fact i686 builds, as they won't even run on i586 CPUs. Other distributions at least name their builds correctly.
@bishnu__newar
@bishnu__newar 2 місяці тому
Can you make video on New harmony os announced by huawei vs Android vs ios
@GaryExplains
@GaryExplains 2 місяці тому
Sorry, but that is unlikely as Huawei stuff is basically China only, which means a) I don't have access, b) isn't of interest to me or most people in the west.
@jimwinchester339
@jimwinchester339 2 місяці тому
Just publicly registering my disdain for this trend. This is totally opposite to one of the reasons Linux came to be in the first place. Supporting older hardware was one of the things Linux was famous for.
@GaryExplains
@GaryExplains 2 місяці тому
Linux isn't stopping support for old hardware. Did you actually watch all the video?
@FougaFrancois
@FougaFrancois 2 місяці тому
No, 32-bit was not given up... every single Intel CPU can still run a 32-bit OS; the backward compatibility is still there, all the way to mov al,2 in 8-bit. That is the beauty, and very often the advantage, of x86. It may change in the near future, though.
@GaryExplains
@GaryExplains 2 місяці тому
I don't think I said that 64-bit CPUs can't run 32-bit software? Where did I say that? You might be interested in my video on x86-s.
@timewave02012
@timewave02012 2 місяці тому
Long mode does give up 16 bit virtual mode, but emulation is fast enough it didn't matter. The real magic of the amd64 instruction set was being able to mix 32 bit instructions into 64 bit code and have it behave predictably and usefully (e.g. "xor eax, eax" clears all of rax). With variable length opcodes of x86, you really wanted to keep using those 32 bit instructions to keep code size down (which is a big deal for cache and pipelining), so all you had to worry about was 64 bit pointers, and instruction pointer relative addressing somewhat made up for that. Doubling general purpose registers, guaranteeing SIMD support, passing floating point arguments via SSE registers, and standardizing on a pass by register calling convention helped a lot too. I didn't watch the video.
@ruben_balea
@ruben_balea 2 місяці тому
For me the fact that my old PC will be *immune to Windows 11* is great news, I was afraid that any day in the morning it would appear updated to an OS that can't do more than Windows 10 but that has a dumb (or is it dumb oriented?) user interface that is reaching the same level of functionality as the MS-DOS Shell "GUI"
@GaryExplains
@GaryExplains 2 місяці тому
What are your plans once Windows 10 stops receiving security updates?
@ruben_balea
@ruben_balea 2 місяці тому
@@GaryExplains *Ignorance is strength* and I can ignore that for another 17 months. I only need it for games after all.
@GaryExplains
@GaryExplains 2 місяці тому
So you aren't using your Windows 10 PC for anything else, you didn't use it to watch this video or write your comments? You don't do any online shopping or banking with it, just games, nothing else.
@ruben_balea
@ruben_balea 2 місяці тому
​@@GaryExplains Of course when I'm using a computer with Windows I do everything from Windows, but I meant that I can stop doing all that from Windows once updates stop coming in about 18 months, and leave my Windows computers just for gaming.

In any case, it seems that they are not going to be compatible with Windows 11 for more and more reasons: first it was the CPU model, then the lack of TPM... things that could be worked around in the registry. But if they decide that the kernel needs a certain type of instructions not supported by the CPU, the registry tricks will be useless.

If I can't use Windows 11 it's okay, because I already use Debian too. I just never tried to install Wine or Proton to run Windows games on it because I never needed to bother.

I already tried Windows 11, but I can't stand the absurd changes they made to the desktop, taskbar and file explorer; it's as if they were trying to create a Windows for dummies, and it's not just me, or there wouldn't be several projects trying to make it usable again... I stopped updating to new versions of MS Office when they decided it was cool to hide the menu commands, and honestly I prefer not to know what Office 365 is like after another 20 years of such "improvements"
@ruben_balea
@ruben_balea 2 місяці тому
@@GaryExplains Also, I'm not against updates. I update everything as soon as possible; sometimes I even restart my Samsung phone in the middle of a WhatsApp chat if I get a notification of a new update. And I'm really happy with my "second hand" S10 Lite (it was a customer return and I saved half the money) because it gets regular updates.
@SasisaPlays
@SasisaPlays 2 місяці тому
x86 as a CISC design is already pretty bloated, with huge die area covered by almost-never-used instruction handlers. And now we get a new ISA level with even more bloat? Is it worth it? I wonder if the extended instruction sets actually give any noticeable performance over older CISC or RISC architectures. It depends on the application and compiler I guess, but as of now LLVM Clang has very weak, almost negligible optimisation for the latest x86 ISA levels; same with GCC, IIRC. To me personally, it seems like pure marketing and a potential threat of hardware bitlocking, like what's already happening with the new TPM and Secure Boot modules, which obviously are created to try and let Microsoft monopolize the OS market and keep you from your right to own your devices (like has already happened with your phones).
@GaryExplains
@GaryExplains 2 місяці тому
Interesting. A couple of questions if I may. 1) x86-64-v3 is marketing by who and for who? 2) How does secure boot give an advantage to Microsoft?
@SasisaPlays
@SasisaPlays 2 місяці тому
@@GaryExplains 1) The appearance of a new ISA level and extended instruction sets allows development of new cores, which can be marketed as better and more efficient because of new instructions that allegedly increase performance, when in reality the difference may be very small. It also lets Microsoft and other developers eventually drop support for older hardware, even if that's not necessary at this point in time. I may be biased though, as I am a RISC enthusiast and a hardware and software engineer in this field, and actually a new ISA revision is an obvious point in the life cycle of any architecture; even ARM does it. 2) Secure Boot allows booting only signed bootloaders, which may allow companies like Microsoft to slowly make their OS the only installation option. Yes, currently Secure Boot is optional, but we already have examples of motherboards (developed by Microsoft partners for their devices) that make Secure Boot impossible to disable. The problem is that Windows is the only OS for which a signature key is provided, so motherboard makers may one day make Secure Boot harder or impossible to disable and lock the PC to Windows only. Yes, that's pessimistic and the chances are not high, but that's what happened to mobiles; why wouldn't they at least try it on PC? If normies would eat it up, installing another OS may become way harder, increasing their OS market share. Even if it doesn't seem like a big problem now, you should acknowledge that companies only care about profit, and if you let them, they will take any rights from you. Also, lately more people, even normies, are considering switching to Linux, which is good, but not for Microsoft. Time will show how things turn out; I hope my alarm is misplaced.
@GaryExplains
@GaryExplains 2 місяці тому
You know there are Linux distros that boot with secure boot. It isn't a Microsoft thing.
@SasisaPlays
@SasisaPlays 2 місяці тому
@@GaryExplains It depends on the Secure Boot keys. Yes, there are Linux distros that boot with Secure Boot, but it may become more strict in the future, limiting signatures only to profitable enterprise distros like Red Hat, or some motherboards could be released only for a specific OS. Currently Secure Boot is not a problem and my examples of how this tech could turn evil are far-fetched, but I'm just afraid it may become a huge problem in the future. They can change the name of the technology, but the idea should be considered a threat, in my opinion. I may be wrong, and I hope I am, but I have seen technology become more and more closed behind a thousand locks, stealing your right to own, change and repair without paying overpriced fees to the manufacturers. Look at the nonsense HP is doing with their printers, or Apple, which has already made its products the peak of anti-consumer practices. It's my personal opinion, but I believe it's correct: all companies are naturally evil; you should take all their products and intentions as an attempt to fool and use you for profit, and you should never trust them in anything. Your view on this may be different, of course; I'm just sharing my concerns.
@olmsfam1
@olmsfam1 2 місяці тому
Can't copyright names. Pentium isn't copyrighted, it is trademarked :)
@andrewdunbar828
@andrewdunbar828 2 місяці тому
Debbie Anne
@GaryExplains
@GaryExplains 2 місяці тому
🚨
@patrickproctor3462
@patrickproctor3462 2 місяці тому
AVX was introduced in Sandy Bridge, not Haswell.
@GaryExplains
@GaryExplains 2 місяці тому
Correct, I said AVX2 was introduced in Haswell. That is why AVX2 is sometimes called the Haswell New Instructions.
@Muhammad-sx7wr
@Muhammad-sx7wr 2 місяці тому
It is a chip architecture that is on its way out: inefficient and bloated, with so many hidden spyware instructions from the likes of Intel and even AMD.
@JarppaGuru
@JarppaGuru 2 місяці тому
step to 128
@GaryExplains
@GaryExplains 2 місяці тому
Eh?
@ckingpro
@ckingpro 2 місяці тому
The reason we moved to 64 bits was that 32-bit addressing only allows up to 4 GB of memory. With 64 bits (OK, really, while the pointer size is 64 bits, the implemented virtual address is 48-52 bits), systems today commonly take 32 to 512 GB while leaving room for up to 16 exabytes of memory. In terms of registers, x86-64 already supports 128 bits for SIMD, 256 in many CPUs, and some even have 512-bit vector registers
@TheEVEInspiration
@TheEVEInspiration 2 місяці тому
It's actually good that Windows 11 requires newer hardware. If anything, they are much too slow in requiring more modern CPUs. It took them ages to ditch 32-bit, for example. And with all the security flaws existing in older-generation processors, it's better to have a new OS require new processors.
@Fetrovsky
@Fetrovsky 2 місяці тому
Pentium Pro*
@GaryExplains
@GaryExplains 2 місяці тому
I guess you are alluding to the fact that the P6 architecture was first in the Pentium Pro, yes that is true, but I didn't say the Pentium II was the first, I said that other microarchitectures were released and I gave examples of where they were used. It was also used in some Celeron processors, but I didn't mention those either.
@Fetrovsky
@Fetrovsky 2 місяці тому
@GaryExplains Yes, the Pentium II was the more mainstream. I actually thought you were mentioning the products that introduced each new architecture. Thanks for your reply!
@halfsourlizard9319
@halfsourlizard9319 2 місяці тому
/ʊˈbʊntuː/
@paulmilligan3007
@paulmilligan3007 2 місяці тому
I think debbie-anne linux is a great idea to get more women into Linux😂
@Traumatree
@Traumatree 2 місяці тому
It is AMD that produced the x86-64 architecture (and saved us all from the train wreck that was Itanium), not Intel. @2:54 you wrote "for Intel EM64T compatibility"; it is the reverse: it is Intel that is compatible with AMD64.
@GaryExplains
@GaryExplains 2 місяці тому
Writing "for Intel EM64T compatibility" does not imply in any way who created the x86-64 standard, and I then go on to say explicitly who did 🤦‍♂️🤷‍♂️
@GaryExplains
@GaryExplains 2 місяці тому
Remember that the levels were only defined in the last few years; dropping support for 3DNow! was done in hindsight to bring a baseline of compatibility between AMD and Intel.
@GaryExplains
@GaryExplains 2 місяці тому
I just rewatched the video and literally I said about AMD being first, 2 seconds after at 02:56 😂😂😂
@pf100andahalf
@pf100andahalf 2 місяці тому
First
@flatD1
@flatD1 2 місяці тому
x86 is the dying CISC instruction set architecture from Intel. Nothing to cry about. Just obsolete. It goes the same way as any technology: from birth to death.
@YouTubeGlobalAdminstrator
@YouTubeGlobalAdminstrator 2 місяці тому
You must be fun at parties...
@shanent5793
@shanent5793 2 місяці тому
Good thing nobody's talking about using Intel's x86, the information was just for background
@GaryExplains
@GaryExplains 2 місяці тому
You obviously use a different definition of obsolete than I do.
@ckingpro
@ckingpro 2 місяці тому
It is hardly dying. And even ARM, Apple chip designers and Intel chip designers admitted that CISC/RISC debate mattered when we had far fewer transistors for decoding. For a long time, Intel CPUs decode into microOPs and ARM even has complex instructions now (and ARM chips can do microOP fusion into complex microOPs as well)
@timewave02012
@timewave02012 2 місяці тому
For decades, x86 has been a RISC machine with a CISC front end. Compilers target the subset of instructions that manufacturers tell them they spent their transistor budget on, with the lesser used ones implemented in microcode. The SIMD units also follow RISC philosophy much more than CISC, and that's where the performance matters anyway.
@halfsourlizard9319
@halfsourlizard9319 2 місяці тому
/ˈdʒɛntuː/