TDD Is A BROKEN Practice
18:02
19 hours ago
Your Tests Are Failing YOU!
9:23
14 days ago
Ensuring SCALABILITY Using MICROSERVICES
12:22
DON'T Comment Your Code
16:54
1 month ago
How To Use TDD For UI Design
13:08
1 month ago
BDD's REAL Role In Software Testing
4:38
How Walmart Achieved TRUE Agility
15:48
2 months ago
ALL Software Development Is Incremental
5:49
Testing Is Bad For Developer Productivity
8:07
The PROBLEM With DORA Metrics
8:33
2 months ago
The WORST Way to Develop Software
15:16
3 months ago
Where Agile Gets It Wrong
19:22
3 months ago
COMMENTS
@kmac499 5 hours ago
The big thing for me with OO, which often seems to be overlooked, is instantiation of an object from its class. Taking the GUI button example you gave: write one class, instantiate 10 copies, set their 'values' to 0-9, and you've got the beginnings of a calculator.
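A minimal Python sketch of that idea (the `Button` class and the calculator wiring are hypothetical, just to illustrate one class yielding ten configured instances):

```python
class Button:
    """One reusable GUI-style button class: behaviour lives in the class,
    per-instance state (the label) is set at instantiation."""
    def __init__(self, label, on_press):
        self.label = label
        self.on_press = on_press

    def press(self):
        self.on_press(self.label)

# Instantiate ten copies and set their 'values' to 0-9:
# the beginnings of a calculator keypad.
display = []
digit_buttons = [Button(str(d), display.append) for d in range(10)]

digit_buttons[4].press()
digit_buttons[2].press()
print("".join(display))  # -> 42
```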
@andrew-schutt 8 hours ago
I'm gonna start watching this everyday to start my day. Thanks!
@JorgeEscobarMX 18 hours ago
Observability is a must that we don't really have at my job. I work as a data QA engineer. What tool could be used to generate dashboards that feed from actual database metrics, generated from the database engine or otherwise?
@alexeykrylov9995 22 hours ago
Neither of them knows how LLMs / "AI" work.
@user-sr1uj6pc1q 1 day ago
As a hybrid Software Engineer/DevOps professional who has worked with both old-school and modern engineers, I've come to deeply admire the efficiency, resilience, and minimalism of old-school experts. These are individuals who might specialize in a few core areas, but they know them inside out. Their expertise typically revolves around:
1. Operating System Internals: They understand operating systems down to the smallest details. Whether it's managing memory, handling system calls, or fine-tuning kernel parameters, their knowledge is comprehensive and profound.
2. Networking Fundamentals: They have a thorough grasp of networking, often excelling in a couple of key protocols like HTTP and TCP/IP. For them, web development is a sub-case of a sub-case, just another layer on top of their deep understanding of networking.
3. Minimalistic Coding Techniques: Their coding philosophy is rooted in simplicity and effectiveness. They avoid unnecessary abstraction and believe in writing code that's straightforward and does exactly what's needed.
Mastering New Trends with Old-School Wisdom: What's truly fascinating about these engineers is their uncanny ability to adopt and adapt to new trends in the software engineering world. Each time a new "hype" emerges, whether it's microservices, containers, or serverless architecture, they quickly:
- Understand Core Concepts: They grasp the underlying principles of the new technology and its theoretical foundations.
- Strip Down Complexity: They filter out all the unnecessary buzzwords, abstractions, and complexity that often accompany new trends.
- Implement Efficiently: Using their old-school techniques, they implement the required functionality in an elegant, minimalist manner.
It's astonishing how they can replicate modern concepts in a way that often leaves me speechless. In one word: {YAGNI & KISS in DNA}
@br3nto 1 day ago
To what extent should logs and metrics be part of our data model vs separate from it? In my mind, log files seem to represent all the things we want to know but haven't incorporated into our data persistence model. In theory, each log entry represents an actual event in our system that should map to a well-defined process. Logs seem like a lazy and incomplete solution. If we instead logged these well-defined events to a database, we could query, join, and filter that data using one solution instead of a separate log technology.
@retagainez 1 day ago
Well, as Dave mentioned, microservice developers might have different standards from team to team. If done separately from the data model, logging that data would be most useful if it's all connected together in some form or another (correlation IDs so that you can query it), and that could require disciplined teams. I agree it is lazy. In my experience, you could create a partial picture from the logs and query it using something like Elasticsearch, but it was hardly ever conclusive enough, and mostly a scratch on the surface for something that needed to be reproduced further. This is THE problem, and it is solved with careful navigation of how teams are organized, along with smart solutions that provide the necessary and exhaustive set of logs/metrics/traces for any particular event. I'm mostly drawing from my own anecdote of working with an Elasticsearch logging system that had logs which were still not valuable on their own without even more context and data. One question would be: how easy would it be to add testing for the logs associated with the transactional data? Whereas if you do it separately, you might not be able to test for the existence of logs. It might just require discipline. Maybe if your business is to sell data with the logs associated with it to your customers? Otherwise, I'm not sure.
@mrpocock 17 hours ago
I think the intuition is that you can ingest those logs into batch jobs or microservices that are a good fit for that particular use of them, e.g. for a particular dashboard widget or report.
@ContinuousDelivery 13 hours ago
I think it really depends on the rigour with which you use logs. An RDBMS is a log-based system: it is implemented by keeping and processing a "transaction log" of changes that modify the data, so as long as you collect ALL of the significant events, you can accurately replay a complete picture. We did a similar thing, in a totally different way, at LMAX, building one of the world's highest-performance financial exchanges. So logs aren't necessarily a poor tool, but we often don't use them very well.
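A toy Python sketch of that replay idea (the account events are invented for illustration): if every significant change is captured as a log entry, current state is just a fold over the log.

```python
# Hypothetical append-only transaction log of account events.
transaction_log = [
    {"event": "deposit",  "amount": 100},
    {"event": "withdraw", "amount": 30},
    {"event": "deposit",  "amount": 5},
]

def replay(log):
    """Rebuild current state by replaying every event, RDBMS-style."""
    balance = 0
    for entry in log:
        if entry["event"] == "deposit":
            balance += entry["amount"]
        elif entry["event"] == "withdraw":
            balance -= entry["amount"]
    return balance

print(replay(transaction_log))  # -> 75
```

Drop any event from the log and the replayed picture is silently wrong, which is the "collect ALL of the significant events" caveat.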
@jasondbaker 1 day ago
Dave is absolutely right that observability is often left as an afterthought in many organizations. I attribute this to a few reasons. First, many organizations take the approach that building out observability is everyone's responsibility, but because there's no one individual or team providing strategic direction it fails to make much headway. Second, I rarely see observability requirements incorporated into developer stories or the "definition of done". It's common to see companies launch new services into production with little or no observability features in place. They might circle back later and add monitoring after an unplanned service outage. Finally, commercial observability tools are getting quite complex and many companies lack a training budget for their engineers. I can't tell you how many times I've walked into organizations and found that they were on their 3rd different observability platform in the past 5 years and they've only completed about 10% of the setup. Every year or two they decide that their current observability platform isn't providing any value so they go looking for a new one. Rinse and repeat.
@retagainez 1 day ago
Anecdotal, but I agree. I've worked in a system where observability of things like resources and infrastructure health was there, but we wouldn't necessarily have anything like a "trace" to debug issues or "metrics" to track customer usage. For such a complex system, where we needed to ship new features quickly, it was odd to see such a lack of business intelligence. The observability/monitoring was most certainly added AFTER an outage; that's when I became familiar with the tools that observed the production system.
@SalihGoncu 1 day ago
I would like to see the results of the experimental development in the neighbouring power plant. (Nuclear or non-nuclear) 😊
@Flobyby 1 day ago
I'm fairly sure today's shirt is specifically not a Qwertee one
@Rcls01 1 day ago
I already have 8 Qwertee shirts and they are the best. Found their site through this channel.
@mrpocock 1 day ago
Orderability rarely, if ever, requires timestamps. You can often use a causal graph, so that later events reference some ID that came from the event/transaction that triggered them. Events that are input to a transaction are "before" in the causal graph, and events triggered by the transaction are "after". You don't always need distinct transaction log entries if you don't have things that are triggered by multiple events.
@karsh001 17 hours ago
You can use vector clocks as well. They work in many distributed systems.
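A minimal vector-clock sketch in Python (node names and API are invented for illustration): each node counts its own events, counters are merged on message receipt, and causal order falls out of component-wise comparison.

```python
def tick(clock, node):
    """Record a local event at `node`."""
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def merge(local, received, node):
    """On message receipt: component-wise max of the two clocks,
    then tick, because receiving is itself an event."""
    nodes = set(local) | set(received)
    merged = {n: max(local.get(n, 0), received.get(n, 0)) for n in nodes}
    return tick(merged, node)

def happened_before(a, b):
    """a causally precedes b iff a <= b component-wise and a != b."""
    nodes = set(a) | set(b)
    return a != b and all(a.get(n, 0) <= b.get(n, 0) for n in nodes)

a = tick({}, "A")             # A records an event: {"A": 1}
b = merge({}, a, "B")         # B receives it:      {"A": 1, "B": 1}
print(happened_before(a, b))  # -> True
```

Clocks that are incomparable in both directions identify concurrent events, which is exactly the information a wall-clock timestamp cannot give you reliably.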
@alessandro_yt 1 day ago
Great, thanks! Talking about tools: I'm having a good experience using Fluentd alongside OpenObserve. I tried Kafka + Elasticsearch with Kibana before, but it was too much for my current context.
@ContinuousDelivery 1 day ago
📄 FREE MICROSERVICES HOW TO GUIDE: Advice to help you get started and focus on the right parts of the problem when you are creating a new microservices system. Includes tips on Design and Messaging. DOWNLOAD FOR FREE HERE ➡ www.subscribepage.com/microservices-guide
@RefactoringDuncan 1 day ago
Surmount barriers, Dave. You want to surmount them, not hit them 😉
@shaunbava1801 1 day ago
I find Allen Holub so often mirrors my own observations. I've been complaining about metrics in complex human systems for years; I think they are often deleterious to the end goal. Metrics can be useful, but when people try to draw conclusions and set policy based on them, people quickly game the metrics and the metrics no longer have any value. Plus we spend so much time, money, and effort on measuring what cannot really be measured, and then people expend effort to hit the metric rather than focusing on the task at hand.

Concrete metrics are easily measured and not easily gamed, which is why, to Allen's point, it works well in manufacturing: if I measure my line's "shippable" products made per hour, it is easy to determine whether productivity went up or down, and it is hard to game that metric. When dealing with knowledge work, where every day is different and creativity tends to be the biggest ingredient, how do you measure this? When you do measure data, can you apply it to teams across your organization?

Here is the big question: why are small enterprises often more efficient and capable than big organizations? Why can a 50-person startup produce a better product than a billion-dollar firm? Look at who goes down the "metric" rabbit hole and tell me, if your end goal is success, whether this is who you want to emulate. I witnessed it first hand when my multibillion-dollar employer started to force their "process" on a smaller firm they purchased. While it was running as it always had, they were evolving their product, modernizing it, staying relevant. Once the heavy-handed managing of "productivity" came into play, the product started to falter, customers started to leave, and they lost their best talent.
@shaunbava1801 1 day ago
I'm living the McKinsey nightmare. It creates a toxic environment, which is inherently bad for output. The metrics they focus on aren't really meaningful; they take some good ideas and combine them with a lot of questionable ones. Looking at story points and flow metrics across developers or teams is utterly pointless, as they tend to be a very imprecise measurement. Being forced to use flow metrics has told me one thing from my observation: looking at a developer, their commits and "personal" numbers tend to go up when they are excited about the work and the product, and down when the company hurts morale. Comparing people against each other doesn't really work, as the best developers write less code and work on the "hardest problems", and some people may also be contributing to the project in ways not measured by these metrics.

Good teams are built around smart, motivated individuals who are inspired and have psychological safety, and around managers who find people's talents, both hidden and overt, and put people in a position to be successful. What McKinsey preaches is the exact opposite: find the "talent" and then push them hard, churn staff, lay people off, and compensate the "top" people but not the bottom in the hope that you get churn. Their solution to everything is "more management", which isn't necessarily what is needed. I've watched senior management destroy high-performing teams; creativity is a HUGE component and not easily measured. The best teams have managers people want to work for: people don't feel burnt out but rather are excited to work and feel their ideas have a shot at being built, managers give their people an opportunity, the people who "want" to be there are welcome, and people are given an opportunity to produce and are only "managed out" at the discretion of the manager and financial realities.
High trust organizations where people are given autonomy and leaders are allowed to lead and make their own decisions always outperform the ones operating by committee.
@guai9632 2 days ago
every TDD evangelist out there: *talks for an hour* "...and here is how you can check that 2*2=4. The rest is up to you."
@bristolfashion4421 2 days ago
Thank you so much for this helpful explanation. I'd love to know more about the help desk data... calls to the desk must have been recorded in detail, showing exactly what was going on in the Horizon system. Somebody, somewhere must have got up one day and decided to limit the spread of that data. I'd like to hear evidence from that person, in which they explain their decision.
@karsh001 2 days ago
There are so many good points in this video. In my experience the main argument against TDD is cost: writing tests is expensive. But so are documenting and doing a proper risk or threat assessment of your design, not to mention the costs related to a faulty deployment. Considering the workflow I have been dealing with in some of my teams recently, I fully appreciate that even more BS is unwelcome. Here is an example from one team I worked with. When I started, the process was something like this:

read headline of use case -> code -> approve code -> deploy to prod

The team was often wildly off target and the code had a lot of security issues. Due to risk-assessment requirements this had to change, and with a bit of mentoring we managed to get the coders to do a napkin design and have a call with the lead architect before implementing. Please note that there is no "document code" step in our workflow. What is important is the design and risk assessment; the latter is based off of the threat model (we use STRIDE-TRIM).

(User story + requirements) -> design -> threat model -> code -> test & approval -> prod

Now I am working on getting acceptance for TDD (we will eventually go towards DDD):

(User story + requirements) -> design -> threat model -> write test -> code -> approval -> prod

A nice benefit is that we have no overtime any more. Everyone keeps within their 40-hour work week, and everyone gets their 5 weeks of vacation per year.
@briancolfer415 3 days ago
Of course comments will often hurt: 1) they waste time, 2) how do you know whether the comment or the code is correct when they contradict each other, 3) they require maintenance when the code is refactored (a combination of 1 and 2).
@mamyname 3 days ago
We actually had exactly the same issue: management was looking for numbers, without regard for the story behind them.
@MikeStock88 3 days ago
TDD is a mindset, not a unit test. That's the biggest problem I see: people think TDD is writing unit tests. I try to write tests as close to the user behaviour as possible. The less the tests know about the internals, the easier it is to refactor.
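A toy illustration of that mindset (the `Cart` class is hypothetical): the test exercises behaviour through the public API only, so the internal representation can be refactored without touching the test.

```python
class Cart:
    def __init__(self):
        self._items = []              # internal detail, free to change

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

def test_total_reflects_added_items():
    # Asserts observable behaviour ("adding items changes the total"),
    # not internal structure such as self._items.
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12

test_total_reflects_added_items()
print("ok")  # -> ok
```

Swapping the list for a dict or a running total would leave this test green, which is the point.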
@only2sea 3 days ago
Some folks think TDD is overengineering, but TDD prevents overengineering in general. You won't make anything you don't need, especially if you're doing TDD top-down.
@swaaagquan3540 3 days ago
The section about pulling it out is very important. Smaller, faster, and more reliable is key. One example: my work API uses a Gotenberg container to convert various text formats to PDF and other formats. We spin up a container, test the conversion of 20 different files in total, and validate based on the conversions working. However, it is slow, prone to 500s if it gets too much traffic from the test suite, takes ages to boot up, and we rely on the default behaviour of returning the file if the accept header requests it.

To address this, I pulled out all the tests that depended on conversion and replaced them with static file responses from the container, which makes it easier to explicitly test error cases too. I kept some tests that depended on conversion to run in a separate step of the CI process, run on release builds or when a developer asks for it. I also used a fork of Gotenberg to add tests of the conversion itself, so when using it in production we have a lot more faith it will work as expected whenever we go to upgrade.

The hardest problem with flaky tests is senior engineers or team leads who just don't care. Culture is always a much harder problem to fix than the actual code. Moving organisation when you've exhausted change efforts can be the best decision, since being with a team who gets it wins out over endless uphill battles.
@georgehelyar 3 days ago
Personally if I'm just doing a throw away proof of concept, I don't write tests at all, or even handle most of the error cases. The important part is that it's throw away and can never be "productionised", and trying to do so is doomed to fail. These pocs are just to quickly get an idea of what is possible and compare implementation details, not to produce production code.
@dafyddrees2287 3 days ago
The first software jobs to be replaced will be tech recruiters. Even the real, live ones don’t know what they’re talking about, make snap decisions based on wrong assumptions and just spend most of their time employing sales tactics. If they behave in a manner indistinguishable from bots, I don’t see why not.
@gaiustacitus4242 3 days ago
You're onto something.
@victortodoran1828 3 days ago
Nice video. I see that it is a year old; has the presenter's view changed over this year? Are there more recent videos on this topic from the presenter?
@orange-vlcybpd2 3 days ago
A thought on the Expertise part (46:00). What about managers who "manage" a project? What is their expertise? Oftentimes managers do nothing other than delegate. Is that the expertise they possess? Delegating? The GOTO statement of the business world? Some say managers are decision makers. So is their expertise in decision making? Or facilitating a decision-making process regarding strategic points? But it is often the customer who makes the decision, so the manager is only the messenger? The information bottleneck? Or maybe they have expertise in how to efficiently organize the working process? The Gantt chart masters, human conveyor belt constructors? That does not require any expertise; today's development barely requires process enforcement, as every developer comes with kanban and scrum work modes preinstalled. A talking head for excuses when things go south? Maybe that. The XX - Xcuses Xpert. But in healthy teams with healthy relationships with the customer, even that is not a role, because the customer's pain is always your headache too.
@phatster88 3 days ago
.. interns.
@charlesdeuter 3 days ago
OOP is just functionality coupled to state. It's not a paradigm, and many of the problems with modern programming come from trying to turn it into one. If you are writing code that handles a LOT of state (game design comes to mind), functional programming can give you slightly more cohesion with only a bit of coupling. If you are contriving ways to couple state to functionality (self.logger = logger 🙄), you are committing a pretty bad anti-pattern.
@Glenningway 4 days ago
I can't find anyone to show how "Platform Engineering" is done, as if they created this new thing for shareholders. It honestly sounds like cloud engineering, but for developers instead of IT Operations, i.e. the dev half of "DevOps". Thing is, businesses are trying to save money through layoffs and downsizing, not by throwing more money at IaC.
@br3nto 4 days ago
6:27 LLMs will never replace programmers because they are missing two important features: they don't understand the actual AST of a program, and they cannot make manipulations, transformations, additions, and subtractions of that AST. By AST I don't mean the actual AST of a single language, but the entire generalised concept of what an AST would be for a multi-language program, as we see in web apps with SQL, backend code, over-the-wire serialisations, HTML, JS, etc.
@HoD999x 2 days ago
"we will never fly to the moon or have more than 640kb in our computers"
@br3nto 2 days ago
@@HoD999x LLMs aren’t the only AI techniques. Maybe a sufficiently complex LLM could gain enough emergent properties to do it. However, I’m placing my money on different techniques that do the things mentioned in my comment more efficiently, simply, and cost effectively.
@JohnKerbaugh 8 hours ago
Separation of concerns already exists, not everybody's a full stack developer. I don't see why the programming paradigms would stay static when given the ability to have specialist agents perform the tasks for far less money.
@br3nto 4 days ago
2:16 The UI is really just a projection/transformation of data in flight and the backend is really just a transformation of data in flight to persisted data. Both well established repeatable patterns. The hardest part therefore is modelling the data in flight. The transformations to persisted data store or UI is just repeating the same patterns.
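A tiny Python sketch of that framing (the order shape is invented for illustration): one in-flight data model, with the UI and the persistence layer each just a transformation of it.

```python
# Data in flight: the model both sides transform.
order = {"id": 7, "item": "coffee", "qty": 2, "unit_price": 3.5}

def to_ui(o):
    """UI as a projection of the in-flight data."""
    total = o["qty"] * o["unit_price"]
    return f"Order #{o['id']}: {o['qty']} x {o['item']} = ${total:.2f}"

def to_row(o):
    """Backend as a transformation toward persisted data (a table row)."""
    return (o["id"], o["item"], o["qty"], o["unit_price"])

print(to_ui(order))   # -> Order #7: 2 x coffee = $7.00
print(to_row(order))  # -> (7, 'coffee', 2, 3.5)
```

Both functions are the same repeatable pattern; the modelling effort sits entirely in the shape of `order`.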
@carolgagne9346 4 days ago
The business model, from my point of view, is simple and perpetual: sell books, certifications, keynote speaker, increase the version number and do it again
@Raaampage 4 days ago
The site demo is down:

MediaWiki internal error.
Original exception: [44233e5025c4c886d03f329d] / Wikimedia\Rdbms\DBConnectionError: Cannot access the database: php_network_getaddresses: getaddrinfo failed: Name or service not known (database) (database)
Backtrace:
@Raaampage 4 days ago
"Phyton" 😱😱
@johntrevithick5900 4 days ago
...the tedious ones.
@benc9765 4 days ago
Dave is obviously a backend engineer
@gammalgris2497 4 days ago
I guess we need a broad understanding of how the algorithms in AI work, especially their advantages and disadvantages. A black box like ChatGPT is a lost battle regarding testing and verification. You can make ever bigger models and tweak bias and such, but you can't be sure where the black box breaks. And still you don't have the time and resources to test and verify the black box. I'd be happy if the tools didn't cause me unnecessary work. It's not yet comparable to delegating certain tasks; there are other, more reliable algorithms for simple workflows. For me that translates into reading about the math and theory. My employer expects reliable software rather than a scientific paper/experiment.
@M43782 4 days ago
Scrum is a 0% interest rate phenomenon.
@NicodemusT 4 days ago
AI don't write unit tests.
@lodrnr 4 days ago
Must have learned that from me!
@NicodemusT 4 days ago
@@lodrnr ha ha
@danilomenoli 3 days ago
They can write if they are told to write.
@omniphage9391 3 days ago
@@danilomenoli that's a recipe for reward hacking
@HoD999x 2 days ago
neither do humans....
@el_arte 4 days ago
That's the wrong way to look at it. AI does a good job when it was trained on a lot of examples. So, right now, AI does a good job with PHP, Ruby, Python, and JavaScript. If you expect good results with Scala, Julia, or other obscure or less-used languages, you'll fail. Java also has a lot of code out there, but maybe not as much in the open. Either way, it'll be a while before AI can deliver more than functions that you clearly describe.
@MiguelVilaG 4 days ago
I use Copilot with Scala, and it works how I'd expect it to: autocompletions that follow the codebase style or in a functional style. I think LLMs will still be able to generalize no matter the source language.
@el_arte 4 days ago
@@MiguelVilaG Maybe Smalltalk training datasets help in that case.
@bobbycrosby9765 4 days ago
I use Copilot with both Java and Clojure. The Clojure output is definitely worse, usually not what I would want. Even for Java, it tends to produce bad variable names, so I have to go through and rename things.
@el_arte 4 days ago
@@bobbycrosby9765 We get used to magic really fast. Soon we will say: Why do I have to push that button to generate my project?
@michelmagix 4 days ago
Data Analysts, Transportation and Logistics, Manufacturing, Customer Service, Healthcare Diagnostics, Financial Services, Legal and Compliance
@banatibor83 4 days ago
Basically very constrained systems, and searching in a database and doing pattern matching. Constrained: data analysis, transportation, manufacturing, financial services Search: customer service, healthcare diagnostics, legal and compliance
@gaiustacitus4242 3 days ago
Very simple legal documents can be generated using AI, but not any contract I would use in a business I own or manage. For example, a typical non-disclosure agreement is two to three pages. The non-disclosure agreement I wrote is seven pages plus a signature page. I write contracts with the same level of detail, solid logic, and "error handling" to retain most of the agreement if any one provision is held to be unenforceable. My NDA was submitted for review by one of the top law firms in the region and I received only the comment that the agreement would not be binding without a quid pro quo, a fact I was already aware of. People who accept mediocrity in all things are placing too much confidence in generative AI. The current state of AI limits it from replacing all but menial tasks like customer service. As for manufacturing, that industry has been using increasingly sophisticated robotic controls for decades to automate discrete processes and to maximize the efficient use of material, equipment, and labor to ensure the right products are made at the right time to meet customer demand. Likewise, the transportation and logistics industry has also been using algorithms which optimize efficiency based on cost and schedule constraints. Generic AI models are not going to be more efficient than dedicated software applications which have been refined over the past 40+ years. There are legal issues yet to be considered about using AI to replace healthcare diagnostics, financial services, and legal/compliance. No underwriter is going to issue professional liability coverage for even the most advanced AI currently available, and no business would put it to any of these uses without such insurance.
@benmaxinm 4 days ago
Clearly TDD is very misunderstood because of the "T" in the branding. If it were called, for example, PCR (Plan, Code, Review) or something sexier, people would maybe at least pause and try to understand it.
@davidg81815 4 days ago
This sounds like the programmer has to be perfect at writing their tests, and I don't know any perfect people. If you have incorrect knowledge of the application and you write a bad test, then commit/push to master, what do you have? A bug. I guess the option would be to get feedback from everyone that you're writing the correct test before you write it. I also wonder how this strategy works for junior developers. The idea that there is an infinite list of appropriately sized stories ready for every developer is a joke.
@SEOng-gs7lj 5 days ago
What are your views on event sourcing? Perhaps a video? If it is good, it should lead to better software faster; otherwise, tell us where it is weak or bad. Thanks!