Memoization: The TRUE Way To Optimize Your Code In Python

96,664 views

Indently

1 year ago

Learn how you can optimize your code using memoization, a form of caching that remembers computations that have already been made, which is especially powerful in recursive functions. Incredibly useful, and it can really speed up slow functions.
Learn about decorators here: • HOW TO USE DECORATORS ...
▶ Become job-ready with Python:
www.indently.io
▶ Follow me on Instagram:
/ indentlyreels

COMMENTS: 142
@maroofkhatib3421 1 year ago
It's good that you showed how the memoization works, but there are built-in decorators for this exact process: we can use cache or lru_cache from the functools library, so we don't need to write the memoization function every time.
@Indently 1 year ago
True
@abdelghafourfid8216 1 year ago
They also have a more robust mapping than key = str(args) + str(kwargs), which is very risky, and they are more efficient, since the standard library uses C optimisations for the caching functions. So there are really not many reasons to write your own caching.
@d4138 1 year ago
@abdelghafourfid8216 What would be a more robust mapping? And why is the current one not robust?
@abdelghafourfid8216 1 year ago
@d4138 Imagine a function with two arguments `arg1` and `arg2`: the current mapping will confuse (arg1="12, 45", arg2="67, 89") with (arg1="12", arg2="45, 67, 89"), and of course you can find infinitely many other cases like this. This behaviour is certainly not what you want in your code. You can make it safer by including the argument names and by making sure your mapping doesn't confuse different object types. So I'd just recommend using the built-in caching functions, which you can safely trust without worrying about the implementation.
@capsey_ 1 year ago
@abdelghafourfid8216 I agree with your point that `str(args) + str(kwargs)` isn't great for many reasons, but your example of confusing arguments is not one of them, because the repr of a tuple and a dict (which is what args and kwargs are, respectively) automatically adds parentheses, curly brackets and quotation marks around them:

    def func(*args, **kwargs):
        print(str(args) + str(kwargs))

    func("12, 45", "67, 89")            # prints ('12, 45', '67, 89'){}
    func("12", "45, 67, 89")            # prints ('12', '45, 67, 89'){}
    func(arg1="12, 45", arg2="67, 89")  # prints (){'arg1': '12, 45', 'arg2': '67, 89'}
    func(arg1="12", arg2="45, 67, 89")  # prints (){'arg1': '12', 'arg2': '45, 67, 89'}
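For readers who want the built-in route this thread recommends, here is a minimal sketch using functools.lru_cache with a recursive fibonacci along the lines of the one in the video (functools.cache is equivalent to lru_cache(maxsize=None) and needs Python 3.9+):

    from functools import lru_cache

    @lru_cache(maxsize=None)  # maxsize=None keeps every result; pass e.g. 128 to cap memory use
    def fibonacci(n: int) -> int:
        if n < 2:
            return n
        return fibonacci(n - 1) + fibonacci(n - 2)

    print(fibonacci(100))  # 354224848179261915075, computed instantly thanks to the cache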
@Bananananamann 1 year ago
To add to this nice video: memoization isn't just some random word, it is an optimization technique from the broader topic of "dynamic programming", where we try to remember steps of a recursive function. Recursive functions can be assholes and turn otherwise linear-time algorithms into exponential beasts. Dynamic programming is there to counter that, because sometimes it may be easier to reason about the recursive solution.
@Indently 1 year ago
Very well said!
@Bananananamann 1 year ago
@Indently Great video though, I learned 2 new things: that we can create our own decorators easily, and how easy it is to apply memoization. I'm sure I'll use both in the future.
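To make the dynamic-programming framing above concrete, a sketch of the bottom-up (tabulation) counterpart to the top-down memoized version: instead of caching on the way down, it fills a table of subproblem results iteratively, solving each one exactly once.

    def fibonacci_bottom_up(n: int) -> int:
        if n < 2:
            return n
        table = [0] * (n + 1)  # table[i] will hold fib(i); each subproblem is solved once
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fibonacci_bottom_up(10))  # 55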
@HexenzirkelZuluhed 1 year ago
You do mention this at the end, but "from functools import lru_cache" a) is in the standard library, b) is even less to type, and c) can optionally limit the amount of memory the memoization cache can occupy.
@rick-lj9pc 1 year ago
Memoization is a very useful technique, but it trades increased memory usage (to hold the cache) for the extra speed. In many cases it is a good tradeoff, but it could also use up all of your memory if overused. For the Fibonacci function, an iterative calculation is very fast and uses a constant amount of memory.
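A sketch of the iterative version this comment describes, which runs in O(n) time with O(1) extra memory (no cache and no recursion limit to worry about):

    def fibonacci_iterative(n: int) -> int:
        a, b = 0, 1
        for _ in range(n):  # only two running values are kept, never a full table
            a, b = b, a + b
        return a

    print(fibonacci_iterative(10))  # 55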
@capsey_ 1 year ago
I remember one time I was tinkering around with memoization of the fibonacci function, and it was so fast I was kind of frustrated by how effective it was. Out of curiosity I went for higher and higher numbers to see if it would ever start slowing down, at least for half a second, and when I went for the billionth Fibonacci number my computer completely froze and I had to physically shut it down 💀
@Trizzi2931 1 year ago
Yes, for Fibonacci the iterative solution is better in terms of space complexity. But generally in dynamic programming, both the top-down (memoization) and bottom-up (iterative) solutions have the same time and space complexity, because the height of the recursion tree will be the same as the size of the array you create for the iterative solution, which is better than the brute-force solution or normal recursion.
@ResolvesS 1 year ago
Or instead, don't use the recursive or the iterative approach: Fibonacci can be calculated by a formula in constant time and with constant memory.
@thisoldproperty 1 year ago
Let's be honest, there is a Fibonacci formula that can be implemented. The point is ideas on caching, which I'd like to see expanded on. Great intro video to this topic.
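The closed-form (Binet) formula these comments refer to, as a sketch; note that it relies on floating-point arithmetic, so it only stays exact up to roughly n = 70 before rounding errors creep in:

    import math

    def fibonacci_binet(n: int) -> int:
        sqrt5 = math.sqrt(5)
        phi = (1 + sqrt5) / 2
        return round(phi ** n / sqrt5)  # exact for small n, drifts for large n due to float precision

    print(fibonacci_binet(10))  # 55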
@zecuse 1 year ago
Memory can be less of an issue if the application allows you to eliminate less frequently used cache items. In the Fibonacci case, the cache is really just generating an iterative implementation (backwards) and looking up the values.
@7dainis777 1 year ago
Memoization is a very important concept to understand for code performance improvement. 👍 I have used a different approach for this exact issue in the past: as a quick way, you can pass a dict as a second argument, which will work as the cache.

    def fib(numb: int, cache: dict = {}) -> int:
        if numb < 2:
            return numb
        else:
            if numb in cache:
                return cache[numb]
            else:
                cache[numb] = fib(numb - 1, cache) + fib(numb - 2, cache)
                return cache[numb]
@swelanauguste6176 1 year ago
Awesome video. This is wonderful to learn. Thanks, I really appreciate your videos.
@erin1569 1 year ago
Maybe some people don't realize why it's so good with Fibonacci and why they aren't getting similar results with their loops inside functions. This caches the function's return value (taking args and kwargs into account), which is mega helpful because the Fibonacci function is recursive: it calls itself, so each fibonacci(x) has to be calculated only one time. Without caching, the fibonacci function has to calculate each previous Fibonacci number from 1, requiring rerunning the same fibonacci(x) a huge number of times.
@castlecodersltd 1 year ago
This helped me have a light bulb moment, thank you
@adventuresoftext 1 year ago
Definitely helping to boost a bit of performance in my massive open-world text adventure I'm developing. Thank you for this tip!
@nameyname1447 1 year ago
Drop a link?
@adventuresoftext 1 year ago
@nameyname1447 A link for what?
@nameyname1447 1 year ago
@adventuresoftext A link to your project. Do you have a GitHub repository or Replit or something?
@adventuresoftext 1 year ago
@nameyname1447 Oh no, it's not released yet, it's still got quite a bit of work left; there are just a few videos about it on this channel.
@nameyname1447 1 year ago
@adventuresoftext Alright, cool! Good luck with it!
@IrbisTheCat 1 year ago
Key creation here seems risky, as in some odd cases two different (kw)args can end up as the same key. Example: args 1, kwargs "2" versus args 12, kwargs an empty string. I would recommend adding a special character between args and kwargs to avoid such a thing.
@tucker8676 1 year ago
That would also be risky: what about args 1 and 2 vs 12? Or args containing the special character? If your args and kwargs are hashable, you could always index with the tuple (*args, InternalSeparatorClass, **kwargs as tuple pairs). The most reliable and practical way is really to use functools.cache or a variant, which does what I just described internally.
@Bananananamann 1 year ago
The key creation is very use-case dependent and should be thought about, true. For this case it works well.
@jcdiezdemedina 1 year ago
Great video! By the way, which theme are you using?
@YDV669 1 year ago
That's so neat. Python has a solution for a problem called Cascade Defines in the QTP component of an ancient language, Powerhouse.
@xxaqploboxx 15 days ago
Thanks a lot, this content is incredible for junior Python devs like me.
@adityahpatel 5 months ago
How is memoization different from the lru_cache you discussed in another video?
@wtfooqs 1 year ago
Used a for loop for my fibonacci function:

    def fib(n):
        fibs = [0, 1]
        for i in range(n - 1):
            fibs.append(fibs[-1] + fibs[-2])
        return fibs[n]

Ran like butter even at 1000+ as an input.
@skiminechannel 1 year ago
In this case you just implement the memoization directly into your algorithm, which I think is the superior method.
@issaclifts 1 year ago
Could this also be used in a while loop, for example?

    while a != 3000:
        print(a)
        a += 1

@IrbisTheCat 1 year ago
Doesn't seem like it. It is for memoizing the result of a function that is called over and over with the same arguments and calculates the same result.
@oskiral320 1 year ago
no
@4artificial-love 1 year ago

    # fibonacci
    import time  # needed for the timing calls below

    time_start = time.time()
    print('st:', time.time())
    a, b = 0, 1
    fb_indx = 36
    ctn = 0
    while ctn != fb_indx:
        print(b, end=' ')
        a, b = b, a + b
        ctn += 1
    print(' ', 'fibonacci', fb_indx, ':', b)
    print(' ended:', time.time(), ' ', 'timed:', time.time() - time_start)
    # timed: 0.0030629634857177734
@tobiastriesch3736 1 year ago
For primitive recursive functions, such as the Fibonacci sequence, tail recursion would also circumvent the issue with the max recursion depth, wouldn't it?
@neilmehra_ 4 months ago
Yes, and by definition any tail-recursive function can be trivially converted to an iterative one, so you could go further and just implement an iterative version.
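For concreteness, a sketch of the tail-recursive form being discussed. One caveat worth adding: CPython does not perform tail-call elimination, so this version still creates one stack frame per step and still runs into the default recursion limit (roughly 1000) for large n; only an explicit loop avoids that entirely.

    def fib_tail(n: int, a: int = 0, b: int = 1) -> int:
        # Tail-recursive form: the recursive call is the last thing the function does,
        # with the running pair (a, b) passed along as accumulators.
        if n == 0:
            return a
        return fib_tail(n - 1, b, a + b)

    print(fib_tail(10))  # 55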
@91BJnaruto 1 year ago
I would like to know how you did that arrow, as I have never seen it before.
@Indently 1 year ago
It's a setting in PyCharm and other code editors. If you look up "ligatures" you might be able to find it for your IDE.
@davidl3383 1 year ago
Thank you very much!
@anamoyeee 6 months ago
Why? The @cache decorator in the functools module does the same thing, and you don't have to write your own implementation; just "from functools import cache".
@MrJester831 1 year ago
Another way to optimize your Python is to use a systems-level language and add bindings 🦀. This is why polars is so fast.
@user-vb9mv9xb1x 9 days ago
Could you please explain where it stores the cache?
@Yotanido 1 year ago
I actually think that this isn't that great of an example. This only works because the function recurses twice. Memoization is a great tool for pure functions that get frequently called with the same input. The recursive Fibonacci definition happens to do this, but it is still not a great implementation. An iterative approach can be even faster and won't use up memory. You could even memoize the iterative implementation, for quick lookups of repeat inputs, but with no wasted memory for all the intermediate values. Memoization is a powerful and useful tool, but it should be used when it is appropriate. In this case a better algorithm is all that is needed. (And you don't even need to change the recursion depth!)
@Indently 1 year ago
Please add resources to your claims so others can further their understanding as well :)
@Yotanido 1 year ago
@Indently Looks like links do indeed get automatically blocked. I'm guessing you can fix that on your end.
@Indently 1 year ago
The example I gave might not be the greatest, but it surely was one of the easiest ways to demonstrate it. I appreciate your informative comment, it's definitely something interesting to take into account :) Thanks for sharing! (I also unblocked the link)
@williamflores7323 1 year ago
This is SICK
@j.r.9966 1 year ago
Why is there not a new cache defined for each function call?
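The question above comes up often, so here is a minimal sketch (assuming a decorator shaped like the one in the video, simplified to one positional argument) showing that the cache dict is created only once, at decoration time, and then lives on in the wrapper's closure across every call:

    def memoize(func):
        print(f"creating cache for {func.__name__}")  # runs once per decorated function
        cache = {}
        def wrapper(n):
            if n not in cache:
                cache[n] = func(n)
            return cache[n]
        return wrapper

    @memoize  # "creating cache for fib" is printed here, at decoration time
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(10)  # no further "creating cache" output: every call reuses the same closed-over dict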
@OBGynKenobi 1 year ago
I used the same technique to minimize AWS Lambda calls to other services when the same return value is expected.
@CollinJS 1 year ago
It should be noted that generating keys that way can break compatibility with certain classes. A class implementing the __hash__ method will not behave as expected if you use its string representation as the key instead of the object itself. The purpose of __hash__ will be lost and __str__ or __repr__ will be used instead, which are neither reliable nor intended to be used for that purpose. It's generally best to let objects handle their own hashing. I realize you can't cover everything in a video, so I wanted to mention it. One solution would be to preserve the objects in a tuple: key = (args, tuple(kwargs.items())). Similarly, the caching wrapper in Python's functools module uses a _make_key function which essentially returns (args, kwd_mark, *kwargs.items()), where kwd_mark is a persistent object() which separates args from kwargs in a flattened list. Same idea, slightly more efficient. As others have noted, I think you missed a good opportunity to talk about functools, but that may now be a good opportunity for a future video. Thanks for your time and content.
@Indently 1 year ago
I really appreciate these informative comments, they really make the Python community a better place. Thank you for taking the time to write it! I will cover functools in a future lesson; I really wanted to get the basics of memoization out and about so people had an idea where to start. So I thank you once again for your patience, and I hope to see you keep up with these informative comments around the internet :)
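A sketch in the spirit of the safer key construction described in the comment above: the sentinel object() plays the role of functools' kwd_mark, the objects themselves (not their string representations) end up in the key, and the key stays hashable as long as the arguments are hashable.

    from functools import wraps

    _kwd_mark = object()  # unique sentinel separating positional args from keyword items

    def memoize(func):
        cache = {}

        @wraps(func)
        def wrapper(*args, **kwargs):
            # Flattened key in the spirit of functools' _make_key; objects hash themselves.
            key = (*args, _kwd_mark, *kwargs.items()) if kwargs else args
            if key not in cache:
                cache[key] = func(*args, **kwargs)
            return cache[key]

        return wrapper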
@velitskylev7068 1 year ago
The lru_cache decorator is already available in functools.
@Indently 1 year ago
That would be a great 10 second tutorial
@miltondias6617 1 year ago
It seems great to implement. Can I do it with any code?
@EvilTaco 1 year ago
Yes, but it's only going to help if that function is called a huge number of times with the same arguments. The reason the first implementation was so slow is that it never saved the results it returned, so it calculated a shit-ton of values it had already calculated at some point previously.
@shivamkumar-qp1jm 2 months ago
What is the difference between lru_cache and this?
@mithilbhoras5951 2 months ago
The functools library already has two memoization decorators: cache and lru_cache. So there's no need to write your own implementation.
@JorgeGonzalez-zs7vj 1 year ago
Nice!!!
@gregorymartin9091 1 year ago
Hello. Could you please explain which is more efficient: using @memoization or @lru_cache? Thank you, and congratulations on this really useful channel!
@HIMixoid 1 year ago
Never heard of "memoization" before. But during the first seconds of the video I just said "lol, just use the cache decorator". And then he started to implement it. A "same thing, different names" situation, I guess. My guess is that the main points of the video are:
- Show that such a thing as caching/memoization exists.
- Show how to implement it yourself so you have a deeper understanding of how it works under the hood. So after you learn it you can even implement it in other languages where you don't have it out of the box.
@QWERTIOX 1 year ago
As a C++ dev, using recursion instead of a basic loop to calc Fibonacci looks like overkill. Personally I would write it with a 2-element table, one boolean to switch the position where I'm setting the newest calculated value, and do this in a loop n times.
@nextgodlevel4056 7 months ago
I love your videos!
@revenity7543 6 months ago
Why don't you use DP?
@djin81 6 months ago
fibonacci(43) generates more than a billion recursive calls to fibonacci(n) using that code. It's no wonder fibonacci(50) doesn't complete; think how many times it's generating a fibonacci(43) call, each of which adds a billion more calls. There are only 50 unique return values: the cached version is 50 function calls versus literally billions of recursive function calls.
@miguelvasquez9849 1 year ago
Awesome
@DivyanshuLohani 1 year ago
At 6:40, on line 28: what is that 10_000 syntax?
@Indently 1 year ago
In Python you can use underscores as separators when typing numbers. The compiler ignores them, but they make the number visually easier on your eyes.
@DivyanshuLohani 1 year ago
@Indently Oh ok, thanks.
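A quick sketch of the underscore digit separators being asked about (PEP 515, available since Python 3.6); the underscores are purely visual and are ignored when the literal is parsed:

    print(10_000 == 10000)  # True: the underscore is ignored by the parser
    print(1_000_000.000_1)  # 1000000.0001, works in float literals too
    print(0xFF_FF)          # 65535, and in hex/binary/octal literals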
@frricbc4442 1 year ago
Can someone explain the '→' to me? I am not familiar with it in Python.
@Harveyleegoodie 1 year ago
Pretty sure it means to return the output of the function as listed. For example, "def main(s) -> int:" makes it return the output as an int, and "def main(s) -> str:" makes it return the output as a str.
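One nuance worth adding to the reply above: the "->" (shown as "→" by font ligatures) is a return type annotation, i.e. a hint. Python does not convert or enforce the declared type at runtime; it is there for readers and for tools such as type checkers. A tiny sketch:

    def half(x: int) -> int:
        return x / 2   # the annotation says int, but this still returns a float

    print(half(5))     # 2.5 -- no error and no conversion; mypy or an IDE would flag the mismatch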
@FTE99699 10 months ago
Thanks
@Indently 10 months ago
Thank you for the generosity! :)
@simonwillover4175 1 year ago
2:40 It should take about 26 hours, judging by the time it took to do fib(30). fib(40) would have been a better number to make your point.
@Dyras. 3 months ago
This implementation of memoization is quite slow compared to using a hashmap or a 2D array directly, but it's a nice starter to dynamic programming for beginners.
@SP-db6sh 1 year ago
Just use the prefect library.
@FalcoGer 1 year ago
You can easily write fib with loops instead of recursion, saving yourself stack frames, stack memory, loads of time and your sanity. Recursion should be avoided whenever possible. It's slow, eats limited stack space, generates insane, impossible-to-debug stack traces and is generally a pain in the rear end.
Caching results makes sense in some applications. But fib only needs 2 values to be remembered. Memory access, especially access larger than a page, or larger than the processor cache, is slow in its own right. An iterative approach also doesn't require boilerplate code for a caching wrapper. And of course you don't get max recursion depth errors from perfectly sane and valid user inputs if you don't use recursion. Which you shouldn't.
The naive recursive approach takes exponential time. The iterative approach only takes O(n). Memoization also takes this down to O(n), but you still get overhead from function calls and memory lookups.
If you want fast code, don't recurse. If you want readable code, don't recurse. If you want easy-to-debug code, don't recurse. The only reason to recurse is if doing it iteratively hurts readability or performance, whichever is more important to you.
The max recursion value is there for a reason. Setting an arbitrary new value that's pulled out of your ass isn't fixing any problems, it just kicks the can down the road. What if some user wants the 10001st number? What you want is an arbitrary number. Putting in the user's input also is a really bad idea. Just... don't use recursion unless it can't be avoided.
Here are my results, calculating fibonacci(40) on my crappy laptop:

    In [27]: measure(fib_naive)
    r=102334155, 44.31601328699617 s
    In [28]: measure(fib_mem)
    r=102334155, 0.00019323197193443775 s
    In [29]: measure(fib_sane)
    r=102334155, 2.738495822995901e-05 s

As you can see, the non-recursive implementation is faster by a factor of 10 again, and it will only get worse with larger values. Of course, calling the function again with the same value for testing in the interpreter is a bit of a mess; obviously an O(1) lookup of fib(1e9999) is going to be faster than an O(n) calculation. fib_naive and fib_mem are the same except for using your implementation of the cache. fib_sane is:

    def fib_sane(n: int) -> int:
        p = 1
        gp = 0
        for _ in range(1, n):
            t = p + gp
            gp = p
            p = t
        return p
@supwut7292 10 months ago
You make great points throughout your post; however, you missed the fundamental point of the video. It wasn't about whether the iterative approach is faster than the recursive approach, but rather about the fundamental idea of caching and exploiting the memory hierarchy. Furthermore, this is not just a theme in programming: it is a key part of computer architecture, software architecture, and processor design. It's almost guaranteed for a recursive function to take longer due to its fundamental nature. However, by exploiting caching we avoid the expensive costs of exhaustive memory lookups.
@weistrass 4 months ago
You reinvented the wheel.
@gabrote42 1 year ago
Whoah
@aniketbose4360 1 year ago
Dude, I know this video is to show memoization, but in case you don't know, there is a formula for the n'th Fibonacci number, and it's very simple too.
@Indently 1 year ago
I know :) thank you for bringing it up though!
@tipoima 1 year ago
Wait, so "memoization" is not a typo?
@Indently 1 year ago
A typo for what?
@tipoima 1 year ago
@Indently "memorization"
@Indently 1 year ago
@tipoima Oh yeah, ahaha, true. I also thought something similar when I first heard it.
@gJonii 1 year ago
This is like the usual talk about memoization, but it's just slightly wrong everywhere. You don't use default libraries to import this functionality, yet you don't write case-specific code for this case either; instead, you try to write generic library code, very badly. A fever-dream-ish quality to this video.
@AcceleratedVelocity 9 months ago
memory leak go BRRRRRRRR
@ThankYouESM 6 months ago
lru_cache seems significantly faster and requires less code.
@bgdgdgdf4488 1 year ago
Lesson: stop using recursion because it's slow. Use while loops instead.
@romain.guillaume 1 year ago
I know it is for demonstration purposes, but this implementation of the Fibonacci sequence is awful, with or without the decorator. Without the decorator you have an O(exp(n)) program, and with it you have a memory cache which is useless unless you need the whole Fibonacci sequence. If you want to keep an O(n) program without memory issues in this case, just do a for loop and update only two variables, a_n and a_n_plus_1. That way it is still an O(n) program but you store only two variables, not n. I know some people will say this is obvious and that the example was chosen for demonstration, but somebody had to say it (if it hasn't been said already).
@Indently 1 year ago
If you have a better beginner example for memoization, I would love to hear about it so I can improve my lessons for the future.
@idk____idk6530 1 year ago
Man I'm thinking what if we use this function in Cython code 💀.
@7DYNAMIN 3 months ago
A better way might be to use generator functions in Python.
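A sketch of the generator-based approach this comment suggests: the sequence is produced lazily, only two numbers are kept in memory, and the caller takes as many terms as it needs.

    from itertools import islice

    def fibonacci_gen():
        a, b = 0, 1
        while True:        # infinite generator; consumers decide how many terms to take
            yield a
            a, b = b, a + b

    print(list(islice(fibonacci_gen(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]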
@robert_nissan 15 days ago
Powerful script, super 🎉
@tmahad5447 11 months ago
Optimizing Python is like feeding a turtle for speed instead of using a fast animal.
@Indently 11 months ago
But your slow turtle won't beat my fast turtle in a race then, and people care about the fastest turtle in this scenario.
@richardbennett4365 1 year ago
I see his point. If his code cannot add up some numbers to Fibonacci(50) in less than a few seconds, he's got the wrong code, which is what he's demonstrating, or he's using the wrong programming language for this task. Everyone knows scientific problems are best handled in FORTRAN (or at least C or Rust), and this problem is pure arithmetic. Python is not the right language for this problem, unless of course memoization is used.
@SourabhBhat 1 year ago
That is only partially right. Even though FORTRAN is better suited for scientific computing, efficient algorithms are very important. Try computing fib(50) using recursion in FORTRAN for yourself. How about fib(60) after that?
@richardbennett4365 1 year ago
@SourabhBhat, you are correct 💯. The point is to make the algorithm as efficient as possible given the language with which one is presented.
@nempk1817 1 year ago
Don't use Python = more speed.
@sangchoo1201 1 year ago
Fibonacci? Use the O(log N) method.
@spaghettiking653 1 year ago
You mean the Binet formula?
@sangchoo1201 1 year ago
@spaghettiking653 Matrix exponentiation.
@JordanMetroidManiac 1 year ago
Here's an O(1) function:

    PHI = 5 ** .5 * .5 + .5
    k1 = 1 + PHI * PHI
    k2 = 1 + 1 / (PHI * PHI)

    def fib(n):
        return int(.5 + .2 * (k1 * PHI ** n + k2 * (-1) ** n * PHI ** -n))

@sangchoo1201 1 year ago
@JordanMetroidManiac It's not O(1): the ** is O(n), and it doesn't work.
@JordanMetroidManiac 1 year ago
@sangchoo1201 I accidentally gave the formula for Lucas numbers. Also, the exponent operator is not O(n) lol.
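For reference, a sketch of the O(log n) integer-only approach this sub-thread is pointing at: fast doubling, which is the matrix-exponentiation idea specialised to Fibonacci. Unlike the floating-point formula, it stays exact for arbitrarily large n.

    def fib_fast_doubling(n: int) -> int:
        # pair(k) returns (F(k), F(k+1)) using the identities
        # F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)**2 + F(k+1)**2.
        def pair(k: int):
            if k == 0:
                return (0, 1)
            a, b = pair(k >> 1)
            c = a * (2 * b - a)
            d = a * a + b * b
            return (d, c + d) if k & 1 else (c, d)
        return pair(n)[0]

    print(fib_fast_doubling(10))   # 55
    print(fib_fast_doubling(100))  # 354224848179261915075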
@Shubham_Shiromani 1 year ago

    from functools import wraps
    from time import perf_counter
    import sys

    def memoize(func):
        cache = {}
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = str(args) + str(kwargs)
            if key not in cache:
                cache[key] = func(*args, **kwargs)
            return cache[key]
        return wrapper

    def sum(n):
        s = 0
        for i in range(n):
            if i % 3 == 0 or i % 5 == 0:
                s = s + i
        return s

    t = int(input().strip())
    for a0 in range(t):
        n = int(input().strip())
        start = perf_counter
        print(sum(n))
        end = perf_counter

    # For this code, it is not working -----------
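A hedged note on why the snippet above doesn't behave as expected: perf_counter is assigned instead of called (start = perf_counter stores the function object, not a timestamp), the @memoize decorator is defined but never applied to the function, and naming the function sum shadows the built-in. A small corrected timing fragment under those assumptions (the rename to sum_multiples and the n = 1000 input are illustrative choices, not from the original comment):

    from time import perf_counter

    def sum_multiples(n: int) -> int:  # renamed to avoid shadowing the built-in sum
        s = 0
        for i in range(n):
            if i % 3 == 0 or i % 5 == 0:
                s += i
        return s

    n = 1000
    start = perf_counter()             # note the parentheses: perf_counter must be called
    print(sum_multiples(n))            # 233168
    end = perf_counter()
    print(f"took {end - start:.6f} s")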
@overbored1337 5 months ago
The best way to optimize Python is to use another language.
@Indently 5 months ago
Like Spanish?
@Fortexik 1 year ago
In functools there are the @cache and @lru_cache decorators.
@CEREBRALWALLSY 1 year ago
Can functools.lru_cache be used for this instead?
@JustMastermind 1 year ago
yes
@sir_damnkrat 1 year ago
It should
@richardboreiko 1 year ago
That was interesting and effective. I tried using 5000 and got an error: Process finished with exit code -1073741571 (0xC00000FD). I started looking for the upper limit on my Windows PC, and it's 2567. At 2568, I start to see the error. It may be because I have too many windows each with too many tabs, so I'll have to try it again after cleaning up my windows/tabs. Or it may just be a hardware limitation on my PC. Still, it's incredibly fast. Thanks!
Also, I just checked the error message on OpenAI (since everybody's talking about it lately) and it said this:
=======================================
Exit code -1073741571 (0xC00000FD) generally indicates that there was a stack overflow error in your program. This can occur if you have a function that calls itself recursively and doesn't have a proper stopping condition, or if you have a very large number of nested function calls.
To troubleshoot this error, you will need to examine your code to see where the stack overflow is occurring. One way to do this is to use a debugger to step through your code and see where the error is being thrown. You can also try adding print statements to your code to trace the flow of execution and see where the program is getting stuck.
It's also possible that the error is being caused by a problem with the environment in which your program is running, such as insufficient stack size or memory. In this case, you may need to modify the environment settings or allocate more resources to the program.
If you continue to have trouble troubleshooting the error, it may be helpful to post a more detailed description of your code and the steps you have taken so far to debug the issue.
=======================================
@dcknature 1 year ago
This reminds me of learning to multiply at school a long time ago 🧓. Thanks for the tutorial video 😊! likes = 57 😉👍
@4artificial-love 1 year ago
I believe that simple is better... and faster...

    # fibonacci
    import time  # needed for the timing calls below

    time_start = time.time()
    print('st:', time.time())
    a, b = 0, 1
    fb_indx = 10000 - 1
    ctn = 0
    while ctn != fb_indx:
        # print(b, end=' ')
        a, b = b, a + b
        ctn += 1
    print(' ', 'fibonacci', fb_indx, ':', b)
    print(' ended:', time.time(), ' ', 'timed:', time.time() - time_start)
    # timed: 0.005985736846923828
@mx-kd2fl 1 year ago
OMG! You can just use functools.cache...
@Indently 1 year ago
Right, because the perfect way to teach how something works is by using pre-made functions.
@Armcollector77 1 year ago
@Indently Thanks for the video, great content. No, you are right that it is not a good way to teach, but mentioning them at the end of your video is probably a good idea.
@Indently 1 year ago
@Armcollector77 That part I can accept, I will try to remember it for future lessons :)
@sesemuller4086 1 year ago
In your Fibonacci implementation, f(n-2) + f(n-1) would be more efficient for the recursion because it reaches a lower depth sooner.
@wishu6553 1 year ago
Yeah, but then caching arg:return like this wouldn't be possible. I guess it's just a basic example.