Tuesday @ 1130 ISA Shootout - a Comparison of RISC V, ARM, and x86 Chris Celio, UC Berkeley V2

47,078 views

RISC-V International

7 years ago

COMMENTS: 67
@benjaminscherrey1124 4 years ago
Interesting talk for sure. It gives insight into the CISC/RISC debate, but also practical information across real-world CPU-intensive programs. For being so young, RISC-V has come a long way and is already quite competitive architecturally with the most advanced processor ISAs. Now it's a matter of implementation details, or of figuring out some new ISA that is completely different.
@pm71241 6 years ago
Why did that guy walk off with the microphone and ruin the otherwise good sound during the very interesting Q&A?!
@Wambotrot 5 years ago
CAN YOU INTRODUCE YOURSELF, PLEASE?
@marijnstollenga1601 7 years ago
Great talk
@inraid 1 year ago
Great talk! Thanks!
@ErikBjareholt 7 years ago
Awesome talk, the hype is real
@JonMasters 5 years ago
Great presentation. However, one nit: it doesn't factor in macro-op fusion across all ISAs.
@benjaminscherrey1124 4 years ago
Presumably a CISC ISA already bakes a higher level of macro-op fusion into the instructions themselves, so the efficiency value is built in. What's being suggested is a mechanism by which a RISC machine can get the same value while keeping its ISA simple.
@movax20h 3 years ago
The beauty of RISC-V is that a processor / micro-architecture implementation can choose not to do macro-op fusion, i.e. to lower power, complexity, or area, and the same binary will still work, just slower. A high-performance core can take advantage of it, without changing the ISA and usually without even recompiling. Also, because the RISC-V ISA is simple, macro-op fusion is likely cheaper and easier to implement on RISC-V micro-architectures, with fewer cases to handle, and some fusions can be implemented on a case-by-case basis (e.g. only the ones that don't require extra register-file read ports).

On a CISC like x86, macro-op fusion can be harder, and the instructions to be fused are bigger. On RISC-V a fused pair could be 2+2, 2+4, or 4+4 bytes (and which to support is also in the hands of the micro-architecture designer), so at most 8 bytes, but probably 6 bytes on average. On a CISC, some 5-7 byte x86 instructions are quite complex, doing loads, shifts, multiplies, and adds. The problem on a CISC like x86 is that as a processor designer you MUST implement them; they are part of the ISA. On RISC-V that 5-7 byte instruction will often be 2 or 3 instructions adding up to 6 or 8 bytes, which is often less dense, but 1) you don't need to implement anything special for it, and 2) you can take those 2 or 3 instructions and fuse them, fully or partially. A CISC core probably already takes that complex 5-7 byte instruction (sometimes even just 4 bytes) and splits it internally into 2 or 3 micro-ops, because otherwise it is hard to track everything through the pipeline. By doing selective macro-op fusion, it is easier to design and to decide what to do, depending on acceptable complexity, CPU application, performance requirements, etc. Compiler support could also be easier.
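As a rough illustration of the fusion idea discussed above (my own sketch, not from the talk or the comment), here is the classic fusion candidate: an indexed array load, which on RV64 typically compiles to a short shift/add/load sequence. A minimal core just executes the three instructions; a high-performance core may recognize the pattern and fuse it into one internal operation.

```c
/* Hedged sketch: table[i] on RV64 typically becomes something like
 *   slli a1, a1, 3     # scale the index by the element size (8 bytes)
 *   add  a0, a0, a1    # form the address
 *   ld   a0, 0(a0)     # load the element
 * A simple core runs these one at a time; a larger core may fuse the
 * address-generation pair (or all three) into a single internal op,
 * with no change to the ISA or the binary. */
#include <stdio.h>

long pick(const long *table, long i) {
    return table[i];
}

int main(void) {
    long t[4] = {10, 20, 30, 40};
    printf("%ld\n", pick(t, 2));   /* prints 30 */
    return 0;
}
```

Compiling this with optimization for an RV64 target and inspecting the generated assembly is an easy way to see the pattern for yourself.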
@esra_erimez 2 years ago
My dad hasn't been this excited about computer hardware since the 6809
@vishwabhai5195 1 year ago
Good to know.
@maxh96-yanz77 5 years ago
How about comparing to VLIW, like the Transmeta Crusoe? Why didn't that processor succeed?
@veeYceeY 3 years ago
It is hard to create a good compiler for VLIW.
@perforongo9078 2 years ago
VLIW processors do a lot of things in software that would otherwise be done in hardware. Each instruction carries a packet of information that pre-computes what else is coming, which allows the processor to process that work more effectively. It has its advantages: fewer transistors are needed, and at one point it made pipelining easier to do. But it increases the need for memory by quite a lot. Also, the information contained in VLIW instructions ended up not being as necessary, since chip designers figured out ways to build superscalar pipelined processors without using as much of the transistor budget. They're also super hard to design compilers for. The compiler isn't just translating code for a VLIW architecture; it is outright preprocessing work that would otherwise be done on chip. That makes the compiler really complicated to write.
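To make the "compiler does the scheduling" point concrete, here is a tiny sketch of my own (not from the comment above): a VLIW compiler has to find independent operations at compile time and pack them into one wide instruction word, whereas a superscalar core discovers the same parallelism at run time.

```c
/* Hypothetical illustration: on a VLIW machine the compiler, not the
 * hardware, decides what executes in parallel.  The first two statements
 * are independent and could be packed into one wide instruction word;
 * the third depends on both results, so it must go in a later bundle. */
#include <stdio.h>

int bundle_demo(int a, int b, int c, int d) {
    int x = a + b;   /* independent op 1 */
    int y = c * d;   /* independent op 2 (could share a bundle with op 1) */
    return x - y;    /* depends on x and y -> later bundle */
}

int main(void) {
    printf("%d\n", bundle_demo(1, 2, 3, 4));   /* prints -9 */
    return 0;
}
```

If the compiler cannot find enough independent work, the unused slots in each bundle are wasted, which is one reason the memory-footprint and compiler-complexity problems mentioned above show up in practice.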
@morthim 5 years ago
It is a bit too complex for a lay programmer and a bit too simple for someone familiar with the tech. It would have helped to present the content with short supporting explanations to anchor it. For example, the 'fusion to extreme' slide has a bunch of ungrounded jargon, as do the rectangles with triangles on them. Also, what is the significance of the instruction numbers? Are they instructions to do a task, instructions available, or what?
@TheChronichalt 4 years ago
All you need is 3 chapters in a Computer Design course to understand everything he said...
@morthim 4 years ago
@TheChronichalt Which three, and what book?
@TheChronichalt 4 years ago
@morthim Computer Organization and Design - RISC-V edition
@prdoyle 2 years ago
The level worked for me.
@user-ww2lc1yo9c 6 years ago
This guy should slow down a bit.
@ZelenoJabko 5 years ago
I was watching the talk at 2x speed, you loser.
@aleksandetatishvili3804 5 years ago
@ZelenoJabko I was going to say the same... Most explanations are so slow. I was very happy when I first found out, years ago, that I could go 2x faster.
@andrewlankford9634 5 years ago
3:00 ARM is CISC?
@JB52520 5 years ago
He explains it at 3:23.
@jeffondrement160 4 years ago
Compared to RISC-V, probably. :lol: RISC-V is the 5th implementation of the Berkeley RISC architecture; it's more faithful to the RISC philosophy than ACM (Average CISC Machines). ^^
@ChrisDreher 4 years ago
His claim that ARM is CISC was seriously weak and distracted from the core point he was making.
@davidcagle4735 5 years ago
ARM does NOT have micro-ops; it is all RISC ISA (even the LDM ops that you complained about are stated ops, the only thing close to CISC, and they are NOT micro-ops). All ARM instructions are hardware tree decode, in other words RISC, and all ARM instructions stay what they are through the pipeline. There is a reason that ARM is a load/store arch (the entire ARMv7/ARMv8 AArch32 ISA can be completely described, including opcodes, on two sheets of paper at 12 point [ARM, not coprocessors]). I would not call AArch64 ARM, as it shares nothing with the 32-bit ARM ISA. The example ops you are giving at 18 minutes 10 seconds for ARM 32-bit are NOT ARM, they are AArch64. There has never been proof that out-of-order multiple issue can beat (or even keep up with) in-order multiple issue for well-optimized code (given the same width of multiple-issue pipeline in both cases), and in-order multiple issue is a lot simpler to implement and optimize for VLSI.
@erikengheim1106 3 years ago
This guy says some ARM CPUs have micro-op-like behavior. This is over my head, but it seems like ARM operates on some of the same principles as a CISC, except that micro-ops don't work the same way because they are all in hardware (no micro-sequencer?): superuser.com/questions/934752/do-arm-processors-like-cortex-a9-use-microcode Ah well, maybe I have to sit down and read a CPU architecture book in detail to understand this stuff. I get the really basic stuff like how an ALU works, addressing, fetching from memory, various gates, simple decoders, etc. But I don't really understand much of how modern CPUs work except in the very abstract.
@THB192 2 years ago
This is fundamentally confusing the issue. Whether or not you "have micro-ops" is actually entirely orthogonal to whether your ISA is RISC or CISC: it's a detail of the micro-architecture. But more to the point, ARM *has micro-ops*. And no, that's not me saying that; it comes from the Arm Cortex-A57 Software Optimization Guide, which explicitly states that instructions are "decoded into internal micro-operations". It's not *micro-coded* (the micro-ops are directly emitted by the decoder), but it uses micro-ops. AArch64 *is* ARMv8. That is from ARM themselves. The slide you mentioned is talking about ARMv8, hence it uses AArch64 instructions. As for your comment about out-of-order vs in-order multiple issue: you assume that programmers and compilers generate perfect code, or even *good* code. That assumption does not even remotely hold up to scrutiny.
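For readers following the LDM/micro-op back-and-forth above, here is a small sketch of my own (not from either commenter): a fixed-size struct copy is the textbook case for ARM's load/store-multiple instructions.

```c
/* Hedged illustration: 32-bit ARM compilers commonly lower a whole-struct
 * copy like this to LDM/STM pairs (one instruction moving several
 * registers), which a particular core may or may not crack into internal
 * micro-ops.  On a plain load/store ISA such as RISC-V the same copy is
 * simply a sequence of individual loads and stores. */
#include <stdio.h>

struct quad { unsigned a, b, c, d; };   /* 16 bytes */

void copy_quad(struct quad *dst, const struct quad *src) {
    *dst = *src;                         /* whole-struct copy */
}

int main(void) {
    struct quad s = {1, 2, 3, 4}, d = {0, 0, 0, 0};
    copy_quad(&d, &s);
    printf("%u %u %u %u\n", d.a, d.b, d.c, d.d);   /* prints 1 2 3 4 */
    return 0;
}
```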
@kenatkenichikato 4 years ago
Slow down pal
@AarshParashar 4 years ago
So how does ARMv8 compare to 64-bit RISC-V? Is it better or worse?
@nextlifeonearth 3 years ago
Yes.
@perforongo9078 2 years ago
RISC-V is a clean sheet design. It's better.
@ChrisDreher 4 years ago
Interesting topic, but it has flaws.

1. If you're going to make bold claims like "ARM is CISC", back it up. Finding 1-2 questionable instructions isn't enough (i.e. 3:43 isn't enough). If the claim was intended to be that 64-bit ARM is CISC but 32-bit was RISC, then that should have been stated. The way it was presented in the talk sounded like something between a personal opinion that distracts from the talk and a marketing statement, like when Intel tried rebranding their x86 processors as "CRISC" to jump on the 1990s RISC market surge.

2. Not including ARM T32/Thumb (a 2-4 byte ISA) in the comparison against RV64GC (a 2-4 byte ISA) was a significant omission. It would have been interesting to see whether the "28% fewer instruction bytes" result would have held up against ARM T32 code. Either T32 should have been included, RV64GC omitted, or an explanation given as to why T32 was omitted.

Note: I say the above as someone who wants to see more RISC-V in the marketplace.
@afterthesmash 3 years ago
The RISC/CISC debate in its strong form is morally bankrupt and always has been. _The one-drop rule is a social and legal principle of racial classification that was historically prominent in the United States in the 20th century. It asserted that any person with even one ancestor of black ancestry ("one drop" of black blood) is considered black._ There's simply no way to classify the original ARM move multiple as anything but a complex instruction. I've seen a photo of the original ARM1 layout; there's a visible block devoted to sequencing move multiple. The motivation for this was that a pure RISC design at the time could generate at most one 32-bit bus read/write cycle per 32-bit bus cycle of instruction fetch. In Unix, copy-on-write is the backbone of an efficient process fork. The page is shared until suddenly it isn't, and then you have to do a very rapid copy of a 4 KB page to a fresh 4 KB page to unshare it again. Without move multiple, you could only do this at half of your memory bus bandwidth, because the other half was devoted to instruction fetch (no icache). That single complex addition to the original RISC instruction set worked around having no icache at all without paying a 50% speed penalty on memory zero or memory copy. Fast forward 35 years, and the entire design is tainted by the standard of Racist Instruction Set Computing and the application of the one-drop rule. It's simply insane that this practice continues. I put that in the starkest possible terms to make a point.

Before icache became universal, every worthwhile instruction set employed some wheeze or other to accelerate memcpy. Tom Moertel has a blog post about hacking a memory-to-screen copy for a game on the 6809 to use PULS/PSHU (push and pop multiple, using two different stack pointers) to copy 14 bytes while fetching only a pair of 2-byte instructions (unfortunately, in the general case you have to disable interrupts to use this hack, as your system stack pointer is otherwise occupied for the duration, and it had the unfortunate side effect of partially reversing byte order due to pushing on one side and popping on the other, which also had to be worked around). ARM1, which followed 7 years later, provided this same technique without the extreme caveats. The Z80 had an LDIR instruction, which is a forerunner of MOVSB from x86. But it took a ghastly 21 cycles to copy one byte before branching back to itself to copy the next byte (forcing another instruction read cycle onto the bus, but at least it didn't faff with your interrupt handling). Recent reverse engineering reveals that the Z80 only had a 4-bit ALU internally, so it was secretly a DX2 on the inside, double pumped with respect to the bus cycle (which is why it was always clocked twice as fast as the 6502, but achieved nowhere close to twice the performance). The $$$ solution on a Z80 was to add the Z8410 DMA memory controller chip, which achieved a 10× speed improvement over memcpy coded for the CPU.

It really was severely irritating to an old-timer when Chris Celio stood there gloating about his icache-enabled "purity", glibly invoking the one-drop rule to taint the entire ARM1 design for a single, entirely sensible hack that doubled memcpy performance, while simultaneously gloating about his macro-op fusion hack, entirely motivated by sizing up his own competitive landscape of the 2010s, to service a vastly smaller performance deficit in far less important edge cases.
It also annoyed me when he did essentially the same thing with x86, which is lugging around 45 years of legacy compatibility. C++ also lugs around nearly 50 years of legacy compatibility, with language design decisions originally made in 1970, and is rarely criticized by anyone coming from another language camp without braying about the legacy cruft to the exclusion of intelligent analysis, rather than steel-manning the language it has now become (there being no shortage of things to complain about even as best construed, so WTF with these simpleton bleatings?).

Conceptually, it would be trivial to invent an x86 "hand" execution mode (ARM going to Thumb mode increased instruction length variability; x86 going to hand mode would decrease instruction length variability). x86 would need three instruction lengths: 16, 32, and 48 bits, because you still need to encode disp32 (a four-byte immediate constant). You might discover you also need a 64-bit format to encode every valid prefix bit in combination with every possible instruction with a disp32 payload. But this doesn't demand anything more of your decoder than macro-op fusion of a pair of 16/32-bit instructions. Many x86 instructions are already far more implicitly fused than RISC-V will ever attain. On the way through, you could also superset things so that nearly every register can be used in nearly every mode, so the orthogonality consideration would also largely disappear (xhand mode).

Then what are you left with as the essential differences? A) x86's bizarrely erratic handling of the condition code register; B) weird aliasing of short registers on top of long registers (AL, AH, AX, EAX); C) those nasty segment registers; D) a small register file supplemented with two-operand indexed rmw instruction formats, which affect memory but don't affect the register file.

You'd probably solve (A) by adding bits to most arithmetic instructions which specify that the condition code register runs in legacy mode (partial update), updates nothing (new), or updates everything (new). This is no more bloat than the predication feature of ARM1. The Pentium Pro tore its hair out over partial updates to the condition code register; it's a total disaster for OOO. Bottom line: EFLAGS needs to die in a fire. But you could fix 80% of the problem with 20% of the work by adding a mode bit to specify the flag register update mode: legacy, none, all. (B) you can't do anything about while retaining existing x86 compatibility; it also amounts to partial register updates and complicates OOO scoreboarding considerably. (C) is one of the ugliest compromises in computing history. Modern Unix kernels set up the segment registers as part of initializing the virtual memory system on entry to protected mode and then mostly leave them alone forever (which you can't quite do while handling page faults in the kernel, for obscure technical reasons in the design of the integrated MMU, but you can come awfully close). (D), concerning memory as the direct target of ALU instructions, is the only aspect that's central to what distinguishes the x86 pseudo-RISC kernel from a true RISC kernel. If someone complaining about x86 doesn't mention this, his or her argument is half-baked. Here, again, Chris was not as forthcoming as he ought to have been. He actually comments on write ports to the register file as a pertinent modern design issue with complex trade-offs. x86 requires fewer write ports through its unique capacity to exploit the dcache as part of a (hugely) extended register file.
The rmw instruction family in x86 is a bit like the zero page on the 6502/6809, as both allow memory to substitute for registers you don't have, at far less cost than you would otherwise experience. The rmw instructions form a computed address on the fly, without committing it to a named register, and then operate on the memory location (both a read and a write), also without committing the result to the named register file. This is why the register colouring algorithm for the original x86 ever survived to live another day, despite the gross inadequacy of the named register file. What does end up a bit stressed out in silicon is what the Pentium Pro used to call the MOB: the memory order buffer. A lot more addresses need to be checked for ordering requirements (mostly use of overlapping memory addresses in close succession). I once read a discussion by a core member of the Athlon design team who said that this was almost a blessing in disguise. In a pure RISC design, you have to perform virtual address translation twice: once on the read, again on the write. With the implicitly fused rmw on x86, you only need to perform virtual address translation once. And so the final score: a busier (and hotter) MOB, but a less busy (and less hot) TLB. There's a scoreboarding penalty to pay for this, but there's also a scoreboarding penalty to pay for OOO in pure RISC, too. Chris Celio is one of the few people I've ever listened to who is competent to talk about this tradeoff in specific terms, but he didn't go there, because he was too busy being glib about his legacy-free view of the world.

How to tell if a RISC zealot is playing conscientious hardball in trashing x86: the presentation includes a slide on TLB access intensity, which is one aspect where pure RISC pays a silly 100% penalty on an operation as simple as incrementing a global event counter that has no business being coloured into a live register. Just about any time I hear a talk bragging about some simplicity or another, the speaker operates with his or her brain a full octave down from deep-fat fry, because, it seems, there's no point crowing about simplicity if it looks like you're actually working hard while you do it. This completely drives me bananas. To a reliable first approximation, complex trade-offs are _never_ simple. Almost every simplicity worth having is the result of working really, really hard.
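To put the "global event counter" example above in concrete terms, here is a hedged sketch of my own (not Celio's code or the commenter's): a read-modify-write of a memory location can be a single memory-destination instruction on x86, while a pure load/store ISA such as RISC-V or AArch64 issues a separate load, add, and store; the comment above argues that the fused x86 form gets away with a single address translation for the pair.

```c
/* Illustration of the rmw point: bumping a global counter.
 * On x86 this can compile to one memory-destination "add" (the address is
 * formed on the fly and the value never lands in a named register); on a
 * load/store ISA it is a load into a register, an add, and a store back. */
#include <stdio.h>

static unsigned long event_count;    /* global counter, lives in memory */

void record_event(void) {
    event_count++;                    /* read-modify-write on memory */
}

int main(void) {
    for (int i = 0; i < 5; i++)
        record_event();
    printf("%lu\n", event_count);     /* prints 5 */
    return 0;
}
```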
@monetize_this8330 4 years ago
Why are they arguing about this in 2019? We all deserve more effort on fixing speculative execution and side-channel vulnerabilities.
@pichinpichi 3 years ago
Because the conference took place in 2016, and back then they weren't using time machines as commonly as we do nowadays.
@joaobonnassis3806 3 years ago
Hi, we need to test the actual workloads by evaluating serialized, massively parallel, and batch applications. SPECint 2006 does not reflect these new applications.
@davidcagle4735 5 years ago
You are comparing C-compiled code, not hand-coded assembly written with the same skill on each platform. So your comparisons do not hold any water, because the compiler's optimizations for the target will vary wildly from arch to arch.
@ZelenoJabko 5 years ago
Why does this not have more likes?
@arjenroodselaar1495 5 years ago
The point here is that you rarely write assembly by hand. While yes, you may be able to go faster with carefully handcrafted instruction sequences on CISC devices, RISC reduces the need to do so by allowing compilers to produce fast, close-to-optimal instruction sequences by default. And because you have a smaller set of common instruction sequences, you can then further optimize those at the silicon layer if you want to.
@erikengheim1106 3 years ago
Yes, if one compiler is poorly optimized this comparison will not hold water. However, in this case we have to assume the x86 and ARM compilers are better optimized than the RISC-V one. That allows us to get a lower bound for RISC-V performance. Hand coding would be pointless in this exercise, since the whole rationale behind RISC is to move a lot of what the CPU does over to the compiler. RISC is only viable if compilers can be made to utilize RISC instruction sets.
@Lithiumbattery 3 years ago
ARM architecture is better.
@chochooshoe 6 years ago
Seriously, slow down... no need to talk so fast...
@ZelenoJabko 5 years ago
Maybe just speed up your slow brain. I watched the talk at 2x speed, no problem.
@Waitwhat469 5 years ago
Watching at 0.75x speed is bearable.
@Waitwhat469 5 years ago
@ZelenoJabko Lol yeah, just do that :p :)
@JB52520 5 years ago
Adrenaline is a bitch.
@xybersurfer 5 years ago
I don't think that these comparisons are very useful. It's not clear whether you are benchmarking the expressiveness of the ISAs or their performance. I also don't think you should be using C compilers to evaluate ISAs.
@erikengheim1106 3 years ago
He was comparing the number of instructions and the number of micro-ops, which seems like a good way of comparing the ISAs. I don't see how you concluded this was benchmarking. And why should you not use C compilers to evaluate ISAs? They are among the most used and most optimized compilers out there. The challenge here is to figure out which ISA has its instructions organized such that you can produce a minimal number of instructions, or minimal code size, for a given problem. Maybe you have some good, well-thought-out objections and alternatives. But since you don't mention either, you add no value to the discussion.
@xybersurfer 3 years ago
@erikengheim1106 At 9:53 he mentions 12 benchmarks, and at 11:35 he shows his data (named "benchmarks"), so this is benchmarking. You are assuming that the compiler has been optimized enough; I'm not that optimistic. I think he is relying too much on compiler writers. As he himself says at 10:33, you can see bigger differences between compiler versions than between ISAs. I don't need to hold your hand through the video to add value.
@erikengheim1106 3 years ago
> at 9:53 he mentions 12 benchmarks and at 11:35 he shows his data (named "benchmarks"). so this is benchmarking.
I agree, but that was not the gist of my statement. You had failed to pick up whether it was a performance benchmark or a benchmark of the number of instructions produced. I clarified for you that it was the latter. My pleasure, you are welcome.
> you are assuming that the compiler has been optimized enough.
How does that matter? You don't grasp the logic of the argument. Let me spell it out clearly. It is fair to assume that in terms of optimization we have Intel > ARM > RISC-V. It then logically follows that if RISC-V is able to match or exceed the Intel output, then these guys have made their case.
> i'm not that optimistic. i think he is relying too much on compiler writers. as he himself says at 10:33, you can see bigger differences between compiler versions than ISAs.
The exact performance is irrelevant as long as the optimization order Intel > ARM > RISC-V can safely be assumed. As long as he can demonstrate that the compiler produces equal or better code for the least-optimized ISA, he has made his case for the superiority of the ISA he is advocating.
> i don't need to hold your hand through the video, to add value
You were the one who didn't pick up that the benchmarks were about instruction count. Nor did you grasp the gist of the argument, so don't lecture me. Besides, my complaint had nothing to do with what was in the video. It was about your claim that C compilers were a poor choice, something you have yet to make the case for.
@xybersurfer 3 years ago
> My pleasure, you are welcome
No, you did not clarify that it was a benchmark on the number of instructions by simply mentioning that you didn't see how it was benchmarking. You did clarify your point by mentioning the least-optimized compiler (Intel > ARM > RISC-V). It's a good point. I wouldn't say it logically follows, but it seems like a decent heuristic. Also, if the main point of the presentation is not clear, then there is room for improving it.
> you add no value to the discussion
However, I think your tone is very judgemental. There is no need to be so unpleasant. After reading this, I simply started responding in kind.
> You had failed to pick up whether it was a performance benchmark or benchmark on number of instructions produced
Here you simply reworded my opening post and made it more judgemental, because now "I failed". I don't enjoy communicating like this.
@erikengheim1106 3 years ago
@xybersurfer I am sorry my initial response was harsh. I think the generally combative and hostile attitude of social media gets to me, and I read something different into your initial statement than what you actually said. It is easy to see somebody write something that looks like what some asshole wrote earlier and then generalize with broad strokes. I hate to see that I am turning into the very thing I hate about social media. But thanks for pointing out how my tone came across. One cannot improve unless one is aware of one's mistakes ;-) Also, if it is any comfort, I did not intend to be quite as harsh as it may have seemed. I am a Nordic, and we tend to speak in very direct and blunt ways, which I have noticed often offends people in the Anglo-Saxon corner of the world. I do my best, but a lot of American ways of speaking still make no sense to me.
@matthewcory4733 7 years ago
CISC is a complete disaster, and the Itanium people are living in the past, but since you are a university person, you don't have very deep knowledge. The Russians (not a fifth-rate leftist university like Berkeley) programmed KolibriOS to show that high-level languages are INCREDIBLY MORE of the bottleneck in modern computer performance. The university loves its glorified cache missing (FP) and its tower of babel of programming languages. It's really stupid stuff only advocated by professors in the embarrassing CS departments. Acorn Computers was founded where? West Coast flakes are behind.
@BattousaiHBr 6 years ago
...okay, I guess?
@MetroidChild 6 years ago
I'm not sure why you're relaying what was already said in the video above? He said instruction sets don't matter if you miss the cache a bunch or spinlock awaiting input. I can't tell if you just want to sound angry, or if the fact that this video highlights how similar CISC and RISC are in their modern versions actually made you angry.
@Wren6991 6 years ago
"The university loves its glorified cache missing (FP) and tower of babel of programming languages." Not sure which talk you watched. The one I saw was all about the density and performance of compiled C code.
@PauloConstantino167 5 years ago
Go back to the rock you came from in Russia. The West invented computers and Russians are out of the conversation. Bye bye.
@David_Phantom 5 years ago
The question I have is how, and why, you managed to attack liberals while commenting about how bad CISC is and how modern systems are poorly optimized. Everyone is entitled to their own thoughts and opinions, but really, was that necessary? The comment just makes you look like someone who wants to be angry for the sake of being angry.