Transcript
Joy Clark: Hello and welcome to a new conversation about software engineering. This is Joy Clark. Today on the CaSE Podcast we will talk about legacy software and immutable architecture. My guest today is Chad Fowler, who is a CTO at Microsoft. Welcome to the show, Chad.
Chad Fowler: Hello, thanks!
Joy Clark: We want to talk about legacy software today. Is this something that interests you?
Chad Fowler: It is, yes. I've been talking about this topic for a few years now. I'm guessing it may not be the most exciting way to start the podcast, but I promise it will be more interesting than what most people think of when they think of legacy software. The idea here is not so much working on old, yucky software, but how we create software that can live a long, healthy life and actually create a legacy in software.
Joy Clark: What's some way that you can do that?
Chad Fowler: Well, maybe I should talk a bit about what legacy means, and how I came to this area of interest. I came into the software industry via music. In fact, "via" is probably not even the right way to say it, but I was a musician and I was trained as a musician, and I got interested in software. As a musician, the word "legacy" means something really nice. You look at famous musicians from history - they have left a legacy of their work. Beethoven has left a legacy, and we all know what Beethoven is. John Coltrane in jazz, and it's true in art and fashion... In pretty much every industry except software, when we use the word "legacy", we're using it to describe something good.
Chad Fowler: I got into software, and that word "legacy" that always meant something nice suddenly means something bad in the software world, and I started thinking about why that is... It's because old systems are bad systems, usually. And not only that, old systems are bad systems that die. That strikes me as a pretty sad concept, because most of us, as software developers, we spend our lives building things that are going to pretty soon die and have to be thrown away, have to be disposed of.
Chad Fowler: When they are thrown away, other software developers are going to come around and call them "legacy systems" as a pejorative, and they're going to hate those systems. So it's just this weird, sad cycle of useless, thrown away work, and in effect, useless, thrown away parts of software developers' lives.
Joy Clark: How long does it take for a piece of software to be considered legacy software usually?
Chad Fowler: It depends. The idealist software engineer version would be as soon as you write code, it's legacy code, which is sort of true... Because as soon as you write code, it's something you have to maintain, something you have to work around and consider as you develop.
Chad Fowler: In my experience, just very unscientifically, it seems like after five years or so, a lot of business systems end up being referred to as the negative version of legacy software, and plans start getting under way to replace them.
Joy Clark: So is it a given that my application will eventually need to be replaced?
Chad Fowler: I hope not. Well, no, it is not a given. It is statistically likely, I would say, but it is not a given. I don't know. I would guess that eventually, over tens of years, there may be no chance that the hardware it ran on before could still work, so maybe the manifestation of your application will have to be replaced. But the way that I think of legacy software and really the metaphor that I've created for myself for bringing software into the future -- it gets this negative name "legacy" because it's hard to change, and if something is hard to change, eventually you have to just replace it, because you're going to need to change the function of it somehow, because business requirements change, or you find bugs, or things get slow and you need to speed them up... All sorts of reasons. If the system itself can't change, you have to replace the system.
Chad Fowler: I believe that software developers in general develop a sort of fetish for code, which may sound like a ridiculous thing to say because we're software developers, of course we're into code... But when I say that, I mean that they develop a fixation on the actual text that you type into the editor and the way that's organized, and they do that sometimes at the expense of thinking about and creating a decent system. It may sound like I'm going fully off-topic now from your question, but it is not a given that your system will have to be replaced; it is probably a given though that large portions or maybe all of your code will eventually need to be replaced. But in my opinion that is okay, as long as you've created a system that can move forward with the changing needs of the environment around it. It's okay to replace code itself; in fact, it's beneficial to do that.
Joy Clark: As software developers, should we have a change in our thinking...? It could be difficult to change the focus, because we're developing software, so that's all we...
Chad Fowler: It's like your value is typing stuff into the editor, that's what you're saying, right? And if that's how you measure yourself, then you're going to feel kind of weird if you're throwing away this stuff you type into the editor regularly.
Joy Clark: Exactly. Especially if you spend, like you said, five years developing this. You've poured out your soul into your software for five years, and then someone's like "We're going to throw it away." I can imagine that would be difficult.
Chad Fowler: Yes, definitely. So that's the key... If someone has to come along and throw away the whole thing at once, that feels pretty bad. It also feels bad for the person that has to do it. I've spent a lot of my career doing that... But if instead you're throwing away tiny bits at a time, a function here or there, or a small service, or a class here or there, and replacing it with a new one, it feels like an upgrade to the system, and in fact it is an upgrade to the system, because you're throwing away a small component.
Chad Fowler: I didn't actually finish -- I mentioned the word metaphor, but I didn't say what the metaphor was... The metaphor that I use, that inspires my thinking around all of this is biology, and I'm pretty clueless when it comes to biology, I have to admit, because I'm a music major and naturally uninterested in science... I guess because I'm from Arkansas, I don't know. But in biology, we've all heard this concept about cellular regeneration, and we use it to say like the two of us as adults are not actually the same physical material that we were when we were children, somehow though we are the same humans, and the system of the body continues, but cells are dying and being reborn, billions a second. It's a pretty remarkable thing.
Chad Fowler: If you apply that sort of idea to software systems, you come up with some pretty interesting properties, one of which is if you want something to not become static and stuck and then eventually die, it has to be constantly changing and evolving. But then you think "Okay, if you're going to force destruction on the components of a system, you can't do that without designing the system to work well with that constant flow of change, because if you do, it would just be too disruptive", and that's why we end up with this pejorative of legacy systems.
Chad Fowler: Imagine going to a legacy system and saying "You have to replace 10% of this system every week while you're working." It would not work. You would need sometimes six months to replace 10% of the system.
Joy Clark: So should we try to create code that can be reused? Should we try to create libraries that we can use in other projects, or is that approach not a good idea?
Chad Fowler: I don't think we should try, no. Part of me wants to say it depends, but I realize that's the answer to every question in software development... But no, I think as a rule, you should not try to create reusable code. That said, it might be possible to do that. Really, what you should do is you should try to create a system that is inherently changeable. Our approaches to creating reusable code are sometimes an attempt to do that.
Chad Fowler: Maybe you've been on a project where you say, "Oh, this little library could be pulled out and be open-sourced", and developers love that, it's fun. But then you write a bunch of documentation and you publish it on GitHub, or whatever... Part of the benefit though that a lot of us have in mind and maybe never even really vocalize is when you open source something, you create a very clean interface to it. That means that you're creating pretty strict boundaries to the thing. Then when you have strict boundaries and clear interfaces, especially when you've committed to the open source world that these are going to be the boundaries and the APIs, it sort of locks you into this situation where you feel like you can't change the API anymore.
Chad Fowler: That sounds like stasis, which maybe immediately feels weird, like you've gotten yourself stuck, but there is a possible benefit to it, which is now the API is unchanging, and APIs are one way that you communicate with a system, which means the code can change behind the APIs. So in fact, if you create reusable libraries in that kind of a way, if you do it right, you might be creating a system ultimately that's easier to change and can live longer. In my experience though, most developers don't do that, especially when they follow the siren song of reuse and they just start creating reusable libraries throughout their existing internal enterprises.
Chad Fowler: The downside and what is more likely to happen is the libraries are created in such a way that they create additional coupling instead of removing coupling. If you think about architectural issues that influence whether you can change your system, coupling is probably the most important one. Coupling with reusable libraries can be that maybe the APIs aren't strict enough and they expose too much, so that you can reach in and see things that should be encapsulated from the outside, so that you're stuck with legacy implementations, old implementations of code, of libraries... Or it might even be things like versioning... Like, we use this library across all of our services and our frontend applications, and whenever we need to change the library because of the way it works, we have to deploy it to all of those services at the same time. That's never a good thing, because that results in you not changing it ever, because you're afraid to.
Chad Fowler: I'm afraid that if we try to create reusable libraries, we are more likely going to create problems for ourselves and get ourselves stuck than we are going to solve any problems.
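To make the coupling point concrete, here is a small, purely illustrative Java sketch - the class and interface names are invented, not taken from Wunderlist or any real library. The first version leaks its internal representation, so callers can reach in and couple themselves to it; the second exposes only a narrow, stable boundary behind which the implementation can be rewritten.

    // Hypothetical illustration of the coupling problem described above.
    // This "library" class leaks its internal representation, so callers can
    // reach in, depend on HashMap semantics, and mutate state the library
    // thinks it owns.
    class LeakyUserCache {
        public final java.util.HashMap<String, String> entries = new java.util.HashMap<>();
    }

    // A stricter boundary: a small, stable interface. The implementation behind
    // it can be rewritten, or even swapped for a remote service, without
    // breaking callers.
    public interface UserCache {
        java.util.Optional<String> lookup(String userId);
        void store(String userId, String value);
    }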
Joy Clark: But for some things - the example in my head is for authentication protocols and that sort of thing, where I might not be an expert at OpenID Connect, so I use a library for that... If I were to do that myself, then it probably wouldn't be a good idea.
Chad Fowler: Right, I agree with that. But I also think there are different ways to look at that. When I was CTO of Wunderlist, we built this pretty crazy backend system with way too many microservices - not really way too many, but that's what people would think, for implementing a to-do app... And we made a rule for ourselves on the backend that there would be no reusable libraries. The question immediately becomes, "Well, how do you do logging and how do you do authentication, and that sort of stuff?" What we ended up doing was, for a lot of the things that you would normally create a reusable library for, we would create reusable services instead, and the services would call those services. That sounds kind of silly when you're talking about logging, because it's really high volume, so you might think "This is not going to perform well", and the fact is of course each call does not perform well compared to an in-memory implementation of a logging library, but there are ways around that.
Chad Fowler: For a lot of these sorts of services that were really high volume and needed to perform extremely well, we created them so that we could deploy them on the same physical hardware as every other service. So whenever a service needed to do logging, it would just log to localhost via a very simple protocol that there's no need to create a reusable library to implement, and then the service would do all the right stuff on localhost underneath it, then shuttle it off somewhere else.
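As a rough illustration of that localhost-logging idea - the port number, message format, and class name below are assumptions made up for this sketch, not Wunderlist's actual protocol - a service could fire log lines at a local collector over UDP using nothing but the standard library:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Minimal sketch: send a log line to a log-forwarding service on the same
    // machine over UDP. The port (5140) and the "service|level|message" format
    // are invented for illustration; the point is that the protocol is simple
    // enough that no shared library is needed to speak it.
    public class LocalhostLogger {
        private static final int LOG_PORT = 5140; // hypothetical local collector port

        public static void log(String service, String level, String message) {
            String line = service + "|" + level + "|" + message;
            byte[] payload = line.getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                DatagramPacket packet = new DatagramPacket(
                    payload, payload.length, InetAddress.getLoopbackAddress(), LOG_PORT);
                socket.send(packet); // fire-and-forget; the local service ships it onward
            } catch (Exception e) {
                // Logging must never take the caller down; drop the line on failure.
            }
        }

        public static void main(String[] args) {
            log("tasks-service", "INFO", "task created");
        }
    }

Because the protocol is trivial, each service can speak it in a few lines of its own code, and the collector running on the same machine can be redeployed or rewritten without touching any of the services that log to it.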
Chad Fowler: A couple of comments about that - one is that the boundary about when you can create reusable services versus libraries is further into the service territory than most people would guess, because they have a lot of preconceived notions about performance and stability that maybe aren't correct. The other is when we create reusable services like that, it becomes much easier to change them, because for one, you can change them at once by just deploying a service, and then everything that uses the service is now updated, you don't have to go deploy every dependent application and service.
Chad Fowler: The other, and probably the more important one, is that when you create separate services that talk to each other via, for example, TCP or UDP, tight coupling is harder to do than loose coupling. So the path of least resistance is just to call the API as intended, and it's just going to do its thing and you're not going to have tight coupling. It would actually be harder to tightly couple and reach into the implementation of the service; it would require extra work from the developers. I believe in trying to set things up so that the lazy answer is also the right answer.
Joy Clark: If you have so many small services that talk to each other, how can you ensure that they don't become dependent on each other?
Chad Fowler: I don't know if I have a good answer to how to ensure they can't be dependent on each other, but I can tell you that architecturally the way I think of these sorts of service architectures where you have hundreds of small services is that there is usually one call flow from top to bottom, and if services end up having to split off and call each other sort of side-to-side in the process and create any chance of a circular dependency, then you're doing it wrong. That ends up resulting in a philosophy and then an implementation around how much should be duplicated and how much should be extracted into services.
Chad Fowler: A stupid example - if you had some sort of specific string formatting you do, that many of your services have to implement, if you end up having to call out to that and it has to call out to something else, that might be worse than just duplicating the logic in your services. But I can say that, without knowing exactly what the mental framework is for it, the services I would create are ones whose dependency chain or graph doesn't cycle in any way; it just goes straight through, and maybe they call one thing off to the side here or there, but typically things sort of align by domain, so you don't have those sorts of complex dependencies.
Joy Clark: How can I test a system? You describe a system that's made up of hundreds of different services - how can I test a system like that?
Chad Fowler: It's very difficult to test a system like that. It's very easy though to test each component, so that's nice. It's also not hard to test the way a system like that talks to itself, the way the components talk to each other. And one thing I haven't really said is if you're going to create a system with a sort of radical, microservice-y architecture like this, you must create a conventional or a standardized approach to how components talk to each other. There needs to be a framework for that, and I don't mean like a framework you download in a jar, but a framework in terms of a documented, agreed-upon way that services talk to each other.
Chad Fowler: That would include not just "Do I use HTTP or some other protocol?" and "Do I use JSON?" but also "How do I authenticate these requests?" and "What do I measure at every point along the way?", these sorts of things. It ends up being not very hard, once you've created a mental framework like that, to create small-ish acceptance tests that pull the framework together, and it's also very easy, trivial even, to test a microservice when it's truly a microservice.
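For illustration, here is a sketch of what such a documented convention might look like using Java's built-in HTTP client - the header names, bearer-token scheme, endpoint URL, and measurements are assumptions for this sketch, not the conventions Wunderlist actually used:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.UUID;

    // Sketch of one possible "every service speaks the same way" convention:
    // HTTP + JSON, a bearer token for authentication, and a request id carried
    // through the call chain so each hop can be measured and correlated.
    public class ServiceClient {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static String call(String url, String json, String token, String requestId)
                throws java.io.IOException, InterruptedException {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Bearer " + token)
                    .header("X-Request-Id", requestId) // propagated, never regenerated mid-chain
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();

            long start = System.nanoTime();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // By convention, every hop reports the same basic measurements.
            System.out.printf("call=%s status=%d elapsed_ms=%d request_id=%s%n",
                    url, response.statusCode(), elapsedMs, requestId);
            return response.body();
        }

        public static void main(String[] args) throws Exception {
            String requestId = UUID.randomUUID().toString();
            // Hypothetical endpoint; in a real system this would be another internal service.
            call("http://localhost:8080/tasks", "{\"title\":\"buy milk\"}", "dev-token", requestId);
        }
    }

The value is less in the specific choices than in the fact that every service makes them the same way, which is what keeps a system of hundreds of tiny services testable as a whole.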
Chad Fowler: Honestly, the way that I approach testing at any other level is measure aggressively in real time. When I say "measure", I mean the normal things like performance and memory usage, but also the main events that you care about. If you're creating a task management system and no one's creating any tasks in it, you know that there's a problem, for example. So measure aggressively in real time, and then create a system so that you can deploy in waves and you can see what the effects of these deployments are on your measurements. If you do that, then yes, you're sort of testing in production, but ultimately the only real way, with normal systems and the compromises that we have in the business world at least, to ensure that things are correct is to actually run them with real users in their real contexts in production. We did a lot of that and we would put in alerting so that we could see when the issues were coming up with the various metrics that we cared about, and then just carefully deploy in waves and roll back if there's an issue.
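A very rough sketch of that deploy-in-waves idea, for illustration only - the Deployer and Metrics interfaces, thresholds, and wave sizes below are all hypothetical, invented purely to show the shape of the loop:

    // Hypothetical sketch of "deploy in waves and watch the measurements";
    // no real deployment API or monitoring system is implied.
    public class WaveDeploy {

        interface Deployer {                 // hypothetical
            void deployWave(String version, int percent);
            void rollback(String version);
        }

        interface Metrics {                  // hypothetical
            double errorRate();              // e.g. errors per request over the last few minutes
            double tasksCreatedPerMinute();  // a business-level signal, not just CPU or memory
        }

        static void release(String version, Deployer deployer, Metrics metrics)
                throws InterruptedException {
            int[] waves = {5, 25, 50, 100};          // percent of traffic per wave (illustrative)
            for (int percent : waves) {
                deployer.deployWave(version, percent);
                Thread.sleep(5 * 60 * 1000);         // let the measurements settle
                if (metrics.errorRate() > 0.01 || metrics.tasksCreatedPerMinute() < 1.0) {
                    deployer.rollback(version);      // the measurements, not the tests, are the gate
                    return;
                }
            }
        }
    }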
Joy Clark: So if I manage to create a system where I can replace the different parts of it, how can I ensure that the system itself isn't monolithic and difficult to replace?
Chad Fowler: Interesting. So the parts are easy to replace, but the system itself is difficult to replace, that's your fear...? Is that right?
Joy Clark: I think so. I mean, it's just the same thing. Eventually, the hardware will probably be out of date, so what happens when that happens?
Chad Fowler: My first answer to that is I would love for everyone to have that problem. Imagine if that's all we complained about - "Man, the hardware is obsolete. We have to rewrite the software system, because it won't run anymore." That's not the problem we're having these days usually, partially because it takes so long for hardware to go obsolete on most of the systems that we work on... But mostly because our systems are so brittle that they die long before any hardware would go out of date. That said, part of me says "I don't care. Let that be the problem." But another part of me knows that if you're able to change things all the time, if you're able to change all the components of something, then think about what components might be... Well, some of the components would be the things that talk to the hardware, the things that talk to databases, which are also sort of like a form of hardware abstraction layer. You could replace all those components over time, so you might be able to migrate a system from obsolete hardware platforms onto new hardware platforms more readily, and perhaps never have to throw away all the system if you do it in this sort of idealistic way, that everything is extremely decoupled, made up of tiny parts, and you're constantly changing it all the time.
Chad Fowler: When I say "all the time", I mean you're really throwing away code in a radical sense every time you change -- imagine every time you have to enhance a piece of code or there's a bug, you have to throw away the component, whatever it is, and start over. That would be the level of granularity, in a probably too radical sense. If you're doing that kind of replacement of components, then I would bet that it's likely that you can evolve an entire system from platform to platform, from database to database, even as internet protocols change, anything. It's probably decoupled enough.
Joy Clark: You've talked a lot about this application you built for Wunderlist - how long ago did you deploy that for the first time?
Chad Fowler: When I went to Wunderlist it was the beginning of 2013, and we had an old architecture that didn't really work that well at the time. We had just released Wunderlist 2, and it was a very popular application. The service and the backend were down for like 48 hours straight when it got launched, because the backend couldn't deal with it. But it was shortly after that that along with a company called Dynport based out of Hamburg we implemented the first versions of immutable infrastructure ideas, even with our old, failing architecture, and then over the course of a year and a half or so we brought the old system back to life enough to keep us going for a while, and then completely re-architected and rewrote everything and deployed it in the middle of 2014.
Chad Fowler: It was at that point that we had this radically distributed, very heterogeneous system that we created, with an architecture that maps to these ideas around the biological metaphor.
Joy Clark: Is the system still running?
Chad Fowler: It is, yes. It's kind of funny, we were acquired by Microsoft a year later, and in the scope of that work we'd been touching a lot of other things instead of the core Wunderlist infrastructure since then. For long periods of time we have not had to do any maintenance on the backend, and therefore we haven't changed the backend, which is kind of interesting. So from one perspective I was really proud -- in fact, when we went through the technical due diligence for the acquisition, which is a process that typically happens when a tech company buys another tech company for their technology, one of the things that we had to talk about and answer with the assessors/auditors is "If you were to leave this thing sitting and just running, how long would it be before it died?", which I thought was a really fascinating question.
Chad Fowler: My answer was "I don't know, of course, but probably a long time", because it sort of takes care of itself, self-heals; things restart if they're not going well, that sort of thing. It turned out to be true, the system ran for a long time. Then there was a piece of database maintenance that we didn't do that resulted in an outage, and we didn't do it because we just weren't looking at the system; we were focusing on some new things that we were building. When that happened though, it resulted in some changes we had to make to work around the problem that was created. I know I'm skipping the details, but imagine the database is locked up and you need to do something heroic to get around it. We found that because we had not changed the system in quite some time, it was actually very hard to change, and it's not just because we didn't know how anymore - though that was actually part of it, honestly; if you don't touch something all the time, you forget how it works... But stuff like software dependencies external to the system, which the build needs in order to work, stopped working in some cases, and it really just underscored to me the need to constantly be changing something. If you want to be able to change something, you have to change it all the time.
Chad Fowler: I hope to not make that mistake again. Even when the system is working really well, you need to keep it moving. Kind of like if you're a healthy person and you just lay in bed all day for weeks on end - it's probably gonna make you unhealthy.
Joy Clark: Did you then fix it and now it's running again?
Chad Fowler: It is, yes. It was painful, and now we're changing it a little more often as well, but I hope to never have to do that again. That was a bad couple of days.
Joy Clark: You mentioned the term "immutable infrastructure" - can you tell me what that is?
Chad Fowler: Yes, so to motivate the term a little bit, going back in time I used to be a system administrator full-time when that was a more normal thing for people to be... And I remember being really proud once when I did uptime on a UNIX box that I was managing, and it was up for well over a year. We had started it up, installed everything on it and then ended up deploying new applications, and it never went down, never had to be rebooted. I thought, "Well, this is really cool. This thing's running so long, and working." But I also remember feeling like, "Oh, man... What happens if it goes down now? Will it actually restart okay? I don't know, I haven't tested that. What if it completely dies? How will I rebuild it? What have I done to this thing? I've logged into this so many times... It's scary."
Chad Fowler: Most of us in the industry who were doing system administration were feeling that way, and naturally a class of tools came along that solved some of these problems. Not the restarting part - that one's less scary than "How do I make sure the system is in a state that I understand and can be reproduced?" So tools like Chef and Puppet came along to solve that problem. But even then, you still have these systems that last for a long time - typically they're big systems - and if you need to add more of them, it's sort of an expensive, slow, clunky process. So I started thinking, "Why not take this biological metaphor and apply it to systems, as well?" Because when you start up a new system, you know exactly what's on it for sure, because you were the one that did it, you just now did it, you haven't forgotten. Typically, when you start up a new system from scratch, it's going to work, whereas when it runs for a long time, it might not; maybe it has memory leaks, maybe it gets hacked... Who knows? So the state of a system that is somewhat correct - and I say that meaning not formally correct, but somewhat correct, a normal business system - might degrade over time, even when you're not touching it.
Chad Fowler: Why not just always have new systems? If a new system that you've just started has all these great qualities of definitely working, known state etc., why not just optimize for that? And that's what immutable infrastructure is about - it's about always having new systems, and whenever you need to change something, rather than change an existing one, you throw away the old system and you replace it with a new one, which in the '90s when I was doing uptime on my servers would not have been something that was tenable, because you would literally have to throw away the server and restart, or have some extra ones and go install the operating system on it from scratch; it would take a long time.
Chad Fowler: But what if you never install an operating system patch on a running system? Then you don't need to worry about what happens when you do that without rebooting, whether the patch actually works with the existing running software, whether you need to restart the services, and so on. What if you never actually upgrade your application on the existing system, so you don't need to worry about that, you don't need to think so much about those restart scripts and the way they play together, but instead, whenever you need to deploy your app, you just deploy whole new servers and replace the old ones with the new ones?
Chad Fowler: So you have this fresh approach, and like I said, it also gives you the same concept as cellular regeneration again. The system as a whole is a collection of running machines, and on those machines running processes, but I can throw them away and replace them as often as I want. And in fact, the more often I do it (within some limits), the better the system is going to run, and the more I can trust it.
Chad Fowler: A lot of people will talk about these sorts of systems - the words "immutable infrastructure" have unfortunately been mis-attributed to Docker and the like... But it's like when Ruby on Rails came out; I always thought that name was silly, because it's really Rails on Ruby, it's backward, and it's the same with Docker and immutable infrastructure. Docker is an implementation of immutable infrastructure ideas along with other stuff, but it isn't immutable infrastructure itself; it benefits from that concept. So you could even do immutable infrastructure literally by hiring a whole bunch of people. Say you hire 1,000 people to staff a data center, and when you press a button they get a message and reinstall operating systems; that could be immutable infrastructure too... It would just be slower and more expensive.
Joy Clark: In your Wunderlist application, what were the servers that you were throwing away? Did you use Docker?
Chad Fowler: We didn't. We created our system before Docker was really well known, and probably before most people were using it in production at all. I heard about it after we did the first releases of the system. What we were doing was we were on AWS and we were creating new EC2 instances, and we had a whole system for creating machine images - they call them AMIs on AWS - so that for every new change we could create a machine image for that change, with all of the operating system and application code on it, and then start and stop as many instances of it as we wanted flexibly, or elastically, as I like to say.
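A minimal sketch of that replace-rather-than-patch flow - the MachineImages and Instances interfaces are hypothetical stand-ins, not the AWS SDK or Wunderlist's actual tooling; the point is only the shape of the process: bake an image per change, start fresh instances from it, and throw the old ones away once the new ones look healthy.

    import java.util.List;

    // Hypothetical sketch of immutable-infrastructure deployment: every change
    // produces a fresh machine image, new instances are started from it, and the
    // old ones are terminated rather than patched.
    public class ImmutableDeploy {

        interface MachineImages {                        // hypothetical
            String buildImage(String gitRevision);       // bake OS + app into an image (e.g. an AMI)
        }

        interface Instances {                            // hypothetical
            List<String> startFromImage(String imageId, int count);
            boolean healthy(String instanceId);
            void terminate(List<String> instanceIds);
            List<String> runningFor(String serviceName);
        }

        static void deploy(String serviceName, String gitRevision, int count,
                           MachineImages images, Instances instances) {
            String imageId = images.buildImage(gitRevision);      // one image per change
            List<String> oldOnes = instances.runningFor(serviceName);
            List<String> newOnes = instances.startFromImage(imageId, count);

            // Only when the freshly started machines look healthy do the old ones go away.
            if (newOnes.stream().allMatch(instances::healthy)) {
                instances.terminate(oldOnes);                     // never patched, never logged into
            } else {
                instances.terminate(newOnes);                     // throw the new ones away instead
            }
        }
    }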
Joy Clark: Nowadays, a lot of programs are being deployed into the cloud, and one thing that I find interesting is that the cloud itself is really a long-running server. There's the example of the S3 outage at Amazon a couple of weeks ago - it happened because S3 was based on some services provided by Amazon that had not been restarted for three years or so. Would something like immutable infrastructure work for a cloud company, or is it at that point not possible?
Chad Fowler: Yes, it would definitely work for a cloud company, and I'm pretty sure, to be fair to Amazon, that they were some of the pioneers of these sorts of concepts. They didn't call it "immutable infrastructure", but I remember hearing 12-13 years ago from Amazon about how they were creating these tiny services, and teams had to maintain them themselves and could replace them at any time, as long as they adhered to some rules. It's probably not quite right to say that the cloud is a long-running server; it's more of a long-running system the way we were talking about it before, and of course, it's multiple long-running systems that make up one abstract concept of cloud. It's probably similar to what happened at Wunderlist, except at a much larger, more devastating scale.
Chad Fowler: I bet the people at AWS who were working on S3 - they knew this thing was up for a long time and I bet that they were afraid of it, and that's probably why it wasn't restarted... If that was the case. I don't know what they were doing, and I also didn't see the post-mortem on it.
Chad Fowler: I do believe that if you start thinking of systems as being immutable and then you add on the concept of cellular regeneration for system health biologically, I believe you're going to end up with a good system if you do that, and a safer system. Had they been doing that - way easier said than done - this specific error probably wouldn't have happened, if it truly was about a system that hadn't been restarted in a long time. I can't say that it would solve every problem they would have, and it would maybe introduce others, but it strikes me as an instance of what you try to avoid by doing immutable infrastructure correctly.
Joy Clark: To be fair to Amazon, they had a service in place that could restore the whole service... It just took longer than they thought, because they hadn't started it for about a day -- I mean, sorry, they hadn't started it for three years or so, but then once the server crashed, it restored itself; it just took about 24 hours, while everyone was freaking out and didn't know what was happening.
Chad Fowler: That's a miracle really, if you think about it. Something at the scale of S3 to only take 24 hours to recover from something like that... Someone did a really good job.
Joy Clark: I can link the report in the show notes... I thought it was a really interesting case. One of the things -- when I hear about Docker, I personally always cringe a little bit because it sounds very buzzword-y, and I'm always curious too about how much of a risk it is to base your entire system on a technology like Docker.
Chad Fowler: I don't think it's a huge risk if you think about it right. If you think Docker is immutable infrastructure and then you implement Docker... If you're one of those people who just equates them, then you probably are not holistically thinking about the system and your tech choices to keep you safe. But if Docker is just an implementation of a pattern that you want to take advantage of, then I don't think it's that risky... Because you're likely going to separate yourself and decouple yourself and your system from that specific implementation detail to some extent.
Chad Fowler: We used to talk about whether we should go to Docker at Wunderlist, and I always said "Yes, we should, it's just not the highest priority right now." But I see Docker for us in that context as absolutely a cost and deployment time optimization layer. That's it. Because everything else about the way we did things would stay exactly the same, but instead of booting up an Amazon AMI, we would have a set of running systems and we would kill and start Docker containers, so deployments would happen faster and more would run on the same machines, with better protections around them than you'd normally get. Otherwise, our system is still our system, and Docker is just a tool.
Chad Fowler: If you're implementing technology that way where you have a point of view, and the tools you use fit or don't fit into that point of view, you're going to be in a better situation than if you let the tool dictate your point of view and you sort of follow its philosophy blindly.
Joy Clark: But as a software developer, I don't have that much control over the system as a whole. Is there anything I can do, are there concepts from this immutable infrastructure that I can apply to my software, or am I just at the mercy of the systems architect who's telling me what to do?
Chad Fowler: Well, I hope you're not, because that wouldn't be a fun way to work. So you mean you're an individual developer and maybe there's some sort of technical guru on your team who's making all the calls about how things are structured? Yes, there still is. Basically, my goal when I talk about all this stuff is to create software that's easier to change, more fun to work on, and doesn't have to be fully replaced at any point in the future, unless you want to, unless the business dictates it. But there should never be a technical reason you have to throw it away.
Chad Fowler: If you think about those goals and then layer on the ideas around immutability... In order to make something immutable - and let's talk about code instead of running servers for a moment - take the crazy idea I had, where if you need to change something you have to throw away the unit of code that it's in and rewrite it. If you were to embed this idea in your head - don't necessarily do it, because it is radical, but think about things that way - how would it make you change how you structure the code you write on a daily basis? Well, I think it would cause you to be very careful about coupling. What language would you be working in? Java, or something like that?
Joy Clark: Yes, among others...
Chad Fowler: Let's say you're working in Java then, because a lot of people do, and you have to write a new method in a class... I would shy away from relying on instance variables, if I could help it. This is going to sound stupid and way too specific, but when you use instance variables in the class, you are coupling yourself to those instance variables and what their values are, and perhaps creating temporal coupling to what they might have been set to outside the context of the class, so you're less likely to even be thread-safe. And thread safety as an issue is often about coupling, or maybe always about coupling, depending on how you define it. So rather than working outside the scope of the method I'm writing, I would try to keep everything in that scope, and I would try to keep that scope as small and exclusive as possible, therefore reducing coupling.
Chad Fowler: When you think about it, if your Java method was essentially a pure function - and a pure function is one where there are no side effects and, given a set of inputs, you're always going to get the same outputs... Those are very, very easy to change; they're even mathematically provable. So if you try to always write your code as close as possible to pure functions, it will change how you think and it will make every piece of code that you write more changeable. And it's not just limited to instance variables, it could be some version of globals, like singletons - when you use that pattern, you're essentially creating a global variable. Any reference to a class or method outside of the current class or method you're in, obviously, is more coupling. Classes that are small are easier to replace than classes that are large... So it's all about limiting scope and limiting size of things that might need to be changed.
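To illustrate the contrast in Java - the Task class and the pricing example are invented for this sketch - the first method depends on hidden instance state, while the pure version and the immutable value type keep everything inside the scope that uses it:

    import java.math.BigDecimal;

    // Contrast between a method coupled to instance state and something closer
    // to a pure function, plus an immutable value type.
    public class PureFunctionSketch {

        // Immutable value: final fields, no setters; "changing" it means making a new one.
        static final class Task {
            final String title;
            final boolean done;

            Task(String title, boolean done) {
                this.title = title;
                this.done = done;
            }

            Task completed() {
                return new Task(title, true);   // returns a new value instead of mutating
            }
        }

        // Coupled version: the result depends on hidden instance state that may
        // have been set elsewhere, possibly on another thread.
        private BigDecimal taxRate = new BigDecimal("0.19");

        BigDecimal priceWithTaxCoupled(BigDecimal net) {
            return net.add(net.multiply(taxRate));
        }

        // Pure version: no side effects, and the same inputs always give the same
        // output, which makes it trivial to test, reason about, and eventually
        // throw away and rewrite.
        static BigDecimal priceWithTax(BigDecimal net, BigDecimal taxRate) {
            return net.add(net.multiply(taxRate));
        }

        public static void main(String[] args) {
            Task t = new Task("write podcast notes", false).completed();
            System.out.println(t.title + " done=" + t.done);
            System.out.println(priceWithTax(new BigDecimal("100"), new BigDecimal("0.19")));
        }
    }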
Chad Fowler: Now, imagine that you're the only developer - and this would be a really stupid setup, and I'm sorry if this is the case; I shouldn't say things like this - and then you have one person who's just the architect... So that person defines how things work and you implement all the code. Even if the architect was an idiot and had no concept of how to create a good system that could survive for a long time, if you followed rules like I was just talking about all the time in your code as the only developer, I believe you might actually even accidentally create a system that has the properties I was talking about through your individual actions day to day.
Joy Clark: So we should just do functional programming all the time?
Chad Fowler: Well, I didn't want to say that... No, you can create bad coupling in functional programming, too. It's more that I hope we all understand the benefits of functional programming, when there are benefits, and try to apply them everywhere. Don't try to do functional programming everywhere necessarily. If something can be a pure function, it's going to be better if it's a pure function. And if you can reduce coupling and reduce scope and never change the value of something, so you don't have variables but you just have immutable values... That's another thing I should have mentioned.
Joy Clark: That's like music for my ears.
Chad Fowler: Yes, you don't have to be doing functional programming to do that. You can do lovely Java, and even idiomatic Java style, but still be thinking about immutability and small scopes and all these sorts of things.
Joy Clark: I'm the person who always makes everything public final, and doesn't let anyone add setters to my classes, so it sounds good to me.
Chad Fowler: Yes, I would like to work on your project.
Joy Clark: Yes, we try to keep everything as immutable as possible. I read your book, and you mentioned that you had a mentor who told you three things that you should learn.
Chad Fowler: Yes, his name was Ken.
Joy Clark: If you could give three things to somebody who's just getting started out as a software developer, what three things do you think they should learn?
Chad Fowler: It's a little different now, and it would depend on what sort of developer. At one level I would say "You should really learn how to do programming on some sort of mobile device, like iOS and Android, and you should learn backend programming, and then you should get really good at one programming language, ideally a functional programming language." That's what I would say.
Chad Fowler: But that's like, "Hey, I wanna be a software developer. What should I do?", which is usually not the level of specificity that people come with. If you want to be a backend developer, I might tell you something else. I might say "Get really deep into one of these ecosystems, like the HashiCorp stuff - all the different things around that, or the Docker ecosystem, everything around that as one, then get really good at a functional programming language, something like Haskell. I would go hardcore, and very different from everything else... And then get really good at something like JavaScript, maybe." It really depends.
Chad Fowler: Usually, what I will try to do is carve things up into three tracks that don't overlap very much. The two programming languages sort of do, but I mentioned JavaScript because of its ubiquity, coupled with the fact that it is ultra-dynamic, and then Haskell is ultra-functional, static, pure, and will make you think very differently than JavaScript. That's the concept of having three tracks that teach you very independent things that bolster each other. But I don't know if I could just generically come up with a list of three things for any new software developer.
Joy Clark: Okay. Well, thank you so much for the time you took to answer all my questions.
Chad Fowler: And thank you, it was fun.
Joy Clark: Yes, I had fun, too. Thank you also to all the listeners. Until next time.