I've inherited 200K lines of spaghetti code—what now?

liuxue.gu@hotmail.com · August 9, 2012 · 13 views


kmote asks:

I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2—think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of nonexistent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is no consensus of opinion about what is needed for the path ahead.

They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin.

Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells," etc.). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"?

So that's my question: What would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)? Related: When is code "legacy"?

Answer: The (very) long answer (237 Votes)

haylem replies:


This is a daunting task indeed, and there's a lot of ground to cover. So I'm humbly suggesting this as a somewhat comprehensive guide for your team, with pointers to appropriate tools and educational material.

Remember: These are guidelines, and as such are meant to be adopted, adapted, or dropped based on circumstances.

Beware: Dumping all this on a team at once would most likely fail. You should try to cherry-pick the elements that give you the best bang-for-sweat, and introduce them slowly, one at a time.

Note: Not all of this applies directly to Visual Programming Systems like G2. For more specific details on how to deal with those, see the Addendum section at the end.

Executive summary for the impatient

- Define a rigid project structure, with: project templates, coding conventions, familiar build systems, and sets of usage guidelines for your infrastructure and tools.
- Install a good SCM and make sure they know how to use it.
- Point them to good IDEs for their technology, and make sure they know how to use them.
- Implement code quality checkers and automatic reporting in the build system.
- Couple the build system to continuous integration and continuous inspection systems.
- With the help of the above, identify code quality "hotspots" and refactor.

Now for the long version... Caution, brace yourselves!

Rigidity is (often) good

This is a controversial opinion, as rigidity is often seen as a force working against you. It's true for some phases of some projects. But once you see it as a structural support, a framework that takes away the guesswork, it greatly reduces the amount of wasted time and effort. Make it work for you, not against you. Rigidity = Process / Procedure. Software development needs good processes and procedures for exactly the same reasons that chemical plants or factories have manuals, procedures, drills, and emergency guidelines: preventing bad outcomes, increasing predictability, maximizing productivity... Rigidity comes in moderation, though!!

Rigidity of the project structure

If each project comes with its own structure, you (and newcomers) are lost and need to start from scratch every time you open one. You don't want this in a professional software shop, and you don't want it in a lab either.

Rigidity of the build systems

If each project looks different, there's a good chance they also build differently. A build shouldn't require too much research or too much guesswork. You want to be able to do the canonical thing and not need to worry about specifics: configure && make install, ant, mvn install, etc. Re-using the same build system and making it evolve over time also ensures a consistent level of quality. You do need a quick README to point to the project's specifics and gracefully guide any user/developer/researcher. This also greatly facilitates other parts of your build infrastructure, namely:

- Continuous integration
- Continuous inspection

So keep your build (like your projects) up to date, but make it stricter over time, and more efficient at reporting violations and bad practices. Do not reinvent the wheel, and reuse what you have already done.

Recommended reading:

- Continuous Integration: Improving Software Quality and Reducing Risk (Duvall, Matyas, Glover, 2007)
- Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Humble, Farley, 2010)

Rigidity in the choice of programming languages

You can't expect, especially in a research environment, to have all teams (and even less all developers) use the same language and technology stack. However, you can identify a set of "officially supported" tools and encourage their use. The rest, without a good rationale, shouldn't be permitted (beyond prototyping). Keep your tech stack simple, and the maintenance and breadth of required skills to a bare minimum: a strong core.

Rigidity of the coding conventions and guidelines

Coding conventions and guidelines are what allow you to develop both an identity as a team and a shared lingo. You don't want to err into terra incognita every time you open a source file. Nonsensical rules that make life harder, or rules so strict that commits are refused over a single trivial violation, are a burden. However:

- a well-thought-out ground ruleset takes away a lot of the whining and thinking: nobody should break these under any circumstances;
- a set of recommended rules provides additional guidance.

Personal approach: I am aggressive when it comes to coding conventions, because I do believe in having a lingua franca for my team. When crap code gets checked in, it stands out like a cold sore on the face of a Hollywood star: it triggers a review and an action automatically. In fact, I've sometimes gone as far as to advocate the use of pre-commit hooks to reject non-conforming commits. As mentioned, it shouldn't be overly crazy and get in the way of productivity: it should drive it. Introduce these slowly, especially at the beginning. But it's far preferable to spending so much time fixing faulty code that you can't work on real issues. Some languages even enforce this by design:

- Java was meant to reduce the amount of dull crap you can write with it (though no doubt many manage to do it anyway).
- Python's block structure by indentation is another idea in this sense.
- Go, with its gofmt tool, completely takes away any debate and effort (and ego!) inherent to style: run gofmt before you commit.

Make sure that code rot cannot slip through. Code conventions, continuous integration and continuous inspection, pair programming, and code reviews are your arsenal against this demon. Plus, as you'll see below, code is documentation, and that's another area where conventions encourage readability and clarity.
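The pre-commit hook idea mentioned above can be sketched in a few lines. This is a minimal illustration, assuming a Git repository; the two style rules here are invented stand-ins for whatever real checker (linter, formatter, static analyzer) your stack provides:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: reject commits whose staged files fail a style check."""
import subprocess
import sys

def staged_files():
    """Ask git for the files staged in this commit (added/copied/modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.strip()]

def run_style_check(path):
    """Placeholder checker: returns a list of violation messages for `path`.
    Swap in your real linter here; these two rules are only examples."""
    violations = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            stripped = line.rstrip("\n")
            if len(stripped) > 120:
                violations.append(f"{path}:{lineno}: line longer than 120 chars")
            if stripped.endswith((" ", "\t")):
                violations.append(f"{path}:{lineno}: trailing whitespace")
    return violations

def main():
    problems = []
    for path in staged_files():
        if path.endswith(".py"):  # restrict to the languages you actually check
            problems.extend(run_style_check(path))
    if problems:
        print("Commit rejected by style check:")
        for p in problems:
            print(" ", p)
        return 1
    return 0

# Install by saving this as .git/hooks/pre-commit (executable) and ending with:
#   sys.exit(main())
```

With the final line enabled, Git refuses any commit whose staged files fail the check, which is exactly the "crap code stands out immediately" effect described above.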

Rigidity of the documentation

Documentation goes hand in hand with code. Code itself is documentation. But there must be clear-cut instructions on how to build, use, and maintain things. Using a single point of control for documentation (like a WikiWiki or DMS) is a good thing. Create spaces for projects, and spaces for more random banter and experimentation. Have all spaces reuse common rules and conventions. Try to make it part of the team spirit. Most of the advice applying to code and tooling also applies to documentation.

Rigidity in code comments

Code comments, as mentioned above, are also documentation. Developers like to express their feelings about their code (mostly pride and frustration, if you ask me). So it's not unusual for them to express these in no uncertain terms in comments (or even code), when a more formal piece of text could have conveyed the same meaning with fewer expletives and less drama. It's OK to let a few slip through for fun and historical reasons: it's also part of developing a team culture. But it's very important that everybody knows what is acceptable and what isn't, and that comment noise is just that: noise.

Rigidity in commit logs

Commit logs are not an annoying and useless "step" of your SCM's lifecycle: you DON'T skip them to get home on time, to get on with the next task, or to catch up with the buddies who left for lunch. They matter, and, like (most) good wine, the more time passes, the more valuable they become. So DO them right. I'm flabbergasted when I see co-workers writing one-liners for giant commits, or for non-obvious hacks. Commits are done for a reason, and that reason ISN'T always clearly expressed by your code and the one line of commit log you entered. There's more to it than that. Each line of code has a story and a history. The diffs can tell its history, but you have to write its story.

- Why did I update this line? Because the interface changed.
- Why did the interface change? Because the library L1 defining it was updated.
- Why was the library updated? Because library L2, which we need for feature F, depended on library L1.
- And what's feature F? See task 3456 in the issue tracker.

Git may not be my SCM of choice, and it may not be the best one for your lab either; but it gets this right, and tries harder than most other SCMs to force you to write good logs, by distinguishing short logs and long logs. Link the task ID (yes, you need one) and leave a generic summary in the short log, then expand in the long log: write the changeset's story. It is a log: it's there to keep track of and record updates.

Rule of thumb: If you were searching for something about this change later, is your log likely to answer your question? Projects, documentation, and code are alive. Keep them in sync, otherwise they do not form that symbiotic entity anymore. It works wonders when you have:

- clear commit logs in your SCM, with links to task IDs in your issue tracker;
- a tracker whose tickets themselves link to the changesets in your SCM (and possibly to the builds in your CI system);
- a documentation system that links to all of these.

Code and documentation need to be cohesive.
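The short-log discipline above can even be enforced mechanically. Here is a minimal sketch of a commit-msg check; the Jira-style ticket pattern (PROJ-1234) and the 72-character limit are assumptions to adjust to your own tracker and conventions:

```python
"""Commit-msg hook sketch: require a tracker ID in the short log."""
import re
import sys

# Assumed ticket-key shape (Jira-style); adapt the pattern to your tracker.
TICKET = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def check_message(message):
    """Return a list of complaints about a commit message; empty means OK."""
    lines = message.splitlines() or [""]
    short = lines[0]
    complaints = []
    if not TICKET.search(short):
        complaints.append("short log has no ticket ID (e.g. PROJ-1234)")
    if len(short) > 72:
        complaints.append("short log longer than 72 characters")
    if len(lines) > 1 and lines[1].strip():
        complaints.append("second line should be blank, long log starts on line 3")
    return complaints

# As .git/hooks/commit-msg, Git passes the message file as the first argument:
#   sys.exit(1 if check_message(open(sys.argv[1]).read()) else 0)
```

The short log carries the ticket ID and summary; the long log, starting after a blank line, carries the changeset's story.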

Rigidity in testing

Rules of thumb:

- Any new code shall come with (at least) unit tests.
- Any refactored legacy code shall come with unit tests.

Of course, these need:

- to actually test something valuable (or they are a waste of time and energy);
- to be well written and commented (just like any other code you check in).

They are documentation as well, and they help to outline the contract of your code. Especially if you use TDD. Even if you don't, you need them for your peace of mind. They are your safety net when you incorporate new code (maintenance or feature) and your watchtower to guard against code rot and environmental failures. Of course, you should go further and have integration tests, and a regression test for each reproducible bug you fix.
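To illustrate that last point, here is a minimal sketch of a regression test pinned to a fixed bug. The `mix_ratio` routine and the ticket number are invented for illustration; the point is that the exact crash scenario becomes a permanent test that must stay green:

```python
"""Regression-test sketch: every reproduced bug gets a test that pins the fix."""
import unittest

def mix_ratio(flow_a, flow_b):
    """Fraction of stream A in the combined flow; 0.0 when both streams are off.
    (Hypothetical example: the pre-fix version divided by zero on (0, 0).)"""
    total = flow_a + flow_b
    if total == 0:
        return 0.0
    return flow_a / total

class MixRatioRegression(unittest.TestCase):
    def test_ticket_3456_both_flows_off(self):
        # Reproduces the original crash scenario from the (hypothetical) ticket.
        self.assertEqual(mix_ratio(0, 0), 0.0)

    def test_normal_operation(self):
        self.assertAlmostEqual(mix_ratio(3.0, 1.0), 0.75)
```

A suite of such tests doubles as documentation: it records which failure modes the code has already been through and guards against their return.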

Rigidity in the use of the tools

It's OK for the occasional developer/scientist to want to try some new static checker on the source, generate a graph or model with another tool, or implement a new module using a DSL. But it's best if there's a canonical set of tools that all team members are expected to know and use. Beyond that, let members use what they want, as long as they are ALL:

- productive;
- NOT regularly requiring assistance;
- NOT regularly adjusting to your general infrastructure (in areas like code, build system, or documentation);
- NOT affecting others' work;
- ABLE to perform any requested task in a timely manner.

If that's not the case, then enforce that they fall back to the defaults.

Rigidity vs. versatility, adaptability, prototyping, and emergencies

Flexibility can be good. Letting someone occasionally use a hack, a quick-n-dirty approach, or a favorite pet tool to get the job done is fine. Never let it become a habit, and never let this code become the actual codebase to support.

Team spirit matters

Develop a sense of pride in your codebase

- Use wallboards:
  - a leaderboard for a continuous integration game
  - wallboards for issue management and defect counting
- Use an issue tracker / bug tracker.
- Avoid blame games.

- DO use continuous integration / continuous inspection games: they foster good-mannered and productive competition.
- DO keep track of defects: it's just good housekeeping.
- DO identify root causes: it future-proofs your processes.
- BUT DO NOT assign blame: it's counterproductive.

It's about the code, not about the developers. Make developers conscious of the quality of their code, but have them see the code as a detached entity and not an extension of themselves, which cannot be criticized. It's a paradox: you need to encourage ego-less programming for a healthy workplace, yet rely on ego for motivational purposes.

From scientist to programmer

People who do not value and take pride in code do not produce good code. For this property to emerge, they need to discover how valuable and fun it can be. Sheer professionalism and the desire to do good are not enough: it needs passion. So you need to turn your scientists into programmers (in the large sense). Someone argued in the comments that after 10 to 20 years on a project and its code, anyone would feel attachment. Maybe I'm wrong, but I assume they're proud of the code's outcomes and of the work and its legacy, not of the code itself or of the act of writing it. From experience, most researchers regard coding as a necessity, or at best as a fun distraction. They just want it to work. The ones who are already pretty versed in it, and who have an interest in programming, are a lot easier to persuade to adopt best practices and switch technologies. You need to get them halfway there.

Code maintenance is part of research work

Nobody reads crappy research papers. That's why they are peer-reviewed, proofread, refined, rewritten, and approved time and time again until deemed ready for publication. The same applies to a thesis and to a codebase! Make it clear that constant refactoring and refreshing of a codebase prevents code rot, reduces technical debt, and facilitates future re-use and adaptation of the work for other projects.

Why all this??!

Why do we bother with all of the above? For code quality. Or is it quality code...? These guidelines aim at driving your team toward this goal. Some of these points help by simply showing your team the way and letting them do it (which is much better) and others take them by the hand (but that's how you educate people and develop habits). How do you know when the goal is within reach?

Quality is measurable

Not always quantitatively, but it is measurable. As mentioned, you need to develop a sense of pride in your team, and showing progress and good results is key. Measure code quality regularly, show the progress between intervals, and show how it matters. Do retrospectives to reflect on what has been done, and on how it made things better or worse. There are great tools for continuous inspection. Sonar is a popular one in the Java world, and it can adapt to other technologies; there are many others. Keep your code under the microscope and look for these pesky annoying bugs and microbes.
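One way to make that progress visible is to normalize violation counts so snapshots stay comparable as the codebase grows. The metric and numbers below are invented for illustration; in practice the raw counts would come from your inspection tool:

```python
"""Sketch: track a quality metric between retrospectives and report the trend."""

def violations_per_kloc(violations, lines_of_code):
    """Normalize a raw violation count, so a growing codebase stays comparable."""
    return 1000.0 * violations / lines_of_code

def trend(snapshots):
    """snapshots: [(label, violations, loc), ...] in chronological order.
    Returns the (label, density) series and the overall direction."""
    series = [(label, violations_per_kloc(v, loc)) for label, v, loc in snapshots]
    direction = "improving" if series[-1][1] < series[0][1] else "degrading"
    return series, direction
```

Plotting such a series on a wallboard between retrospectives is a cheap way to show the team that the effort is paying off.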

But what if my code is already crap?

All of the above is fun and cute like a trip to Never Land, but it's not that easy to do when you already have (a pile of steamy and smelly) crap code, and a team reluctant to change.

Here's the secret: you need to start somewhere.

Personal anecdote: On one project, we worked with a codebase originally weighing in at 650,000+ Java LOC, 200,000+ lines of JSPs, 40,000+ JavaScript LOC, and 400+ MB of binary dependencies. After about 18 months, it's 500,000 Java LOC (MOSTLY CLEAN), 150,000 lines of JSPs, and 38,000 JavaScript LOC, with dependencies down to barely 100 MB (and these are not in our SCM anymore!).

How did we do it? We just did all of the above. Or tried hard. It's a team effort, but we slowly injected into our process regulations and tools to monitor the heart rate of our product, while hastily slashing away the "fat": crap code, useless dependencies. We didn't stop all development to do this: we have occasional periods of relative peace and quiet where we are free to go crazy on the codebase and tear it apart, but most of the time we do it by defaulting to a "review and refactor" mode every chance we get: during builds, during lunch, during bug-fixing sprints, during Friday afternoons.

There were some big "works": switching our build system from a giant Ant build of 8,500+ XML LOC to a multi-module Maven build was one of them. We then had:

- clear-cut modules (or at least it was already a lot better, and we still have big plans for the future)
- automatic dependency management (for easy maintenance and updates, and to remove useless deps)
- faster, easier, and reproducible builds
- daily reports on quality

Another big one was the injection of "utility tool-belts," even though we were trying to reduce dependencies: Google Guava and Apache Commons slim down your code and greatly reduce the surface for bugs in it.

We also persuaded our IT department that maybe using our new tools (JIRA, Fisheye, Crucible, Confluence, Jenkins) was better than using the ones in place. We still needed to deal with some we despised (QC, Sharepoint, and SupportWorks...), but it was an overall improved experience, with some more room left.

And every day, there's now a trickle of anywhere from one to dozens of commits that deal only with fixing and refactoring things. We occasionally break stuff (you need unit tests, and you had better write them before you refactor stuff away), but overall the benefit for our morale AND for the product has been enormous. We get there one fraction of a code-quality percentage at a time. And it's fun to see it increase!!!

Note: Again, rigidity needs to be shaken to make room for new and better things. In my anecdote, our IT department is partly right in trying to impose some things on us, and wrong about others. Or maybe they used to be right. Things change. Prove that there are better ways to boost your productivity. Trial runs and prototypes are here for this.

The super-secret incremental spaghetti code refactoring cycle for awesome quality

+-----------------+      +-----------------+
|  A N A L Y Z E  +----->| I D E N T I F Y |
+--------+--------+      +--------+--------+
         ^                        |
         |                        v
+--------+--------+      +--------+--------+
|    C L E A N    +<-----|      F I X      |
+-----------------+      +-----------------+

Once you have some quality tools in your toolbelt:

1. Analyze your code with code quality checkers: linters, static analyzers, or what have you.

2. Identify your critical hotspots and low-hanging fruit. Violations have severity levels, and large classes with a large number of high-severity violations are a big red flag: as such, they appear as "hot spots" on radiator/heatmap types of views.

3. Fix the hotspots first. This maximizes your impact in a short timeframe, as they have the highest business value. Ideally, critical violations should be dealt with as soon as they appear, as they are potential security vulnerabilities or crash causes and present a high risk of inducing a liability (and in your case, bad performance for the lab).

4. Clean the low-level violations with automated codebase sweeps. This improves the signal-to-noise ratio, so you are able to see significant violations on your radar as they appear. There's often a large army of minor violations at first if they were never taken care of and your codebase was left loose in the wild. They do not present a real "risk," but they impair the code's readability and maintainability. Fix them either as you meet them while working on a task, or in large cleaning quests with automated code sweeps if possible. Do be careful with large auto-sweeps if you don't have a good test suite and integration system, and make sure to agree with co-workers on the right time to run them, to minimize the annoyance.

5. Repeat until you are satisfied. Which, ideally, you should never be, if this is still an active product: it will keep evolving.

Quick tips for good housekeeping

When in hotfix mode, based on a customer support request: it's usually best practice NOT to go around fixing other issues, as you might introduce new ones unwillingly. Go at it SEAL-style: get in, kill the bug, get out, and ship your patch. It's a surgical and tactical strike.

But for all other cases, if you open a file, make it your duty to:

- Definitely: review it (take notes, file issue reports)
- Maybe: clean it (style cleanups and minor violations)
- Ideally: refactor it (reorganize large sections and their neighbors)

Just don't get sidetracked into spending a week going from file to file and ending up with a massive changeset of thousands of fixes spanning multiple features and modules; it makes future tracking difficult. One issue in code should be one ticket in your tracker. A changeset can sometimes impact multiple tickets, but if that happens too often, you're probably doing something wrong.
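The "identify" step of the analyze/identify/fix/clean cycle above can be sketched as a small severity-weighted ranking. The severity names, weights, and file names here are invented for illustration, since every checker defines its own scale:

```python
"""Sketch: rank files as hotspots by weighted violation severity."""
from collections import defaultdict

# Hypothetical weights; real checkers (Sonar, linters, ...) define their own scales.
SEVERITY_WEIGHT = {"critical": 10, "major": 3, "minor": 1}

def rank_hotspots(violations):
    """violations: iterable of (path, severity) tuples from your checker's report.
    Returns paths ordered by descending weighted score, worst first."""
    scores = defaultdict(int)
    for path, severity in violations:
        scores[path] += SEVERITY_WEIGHT.get(severity, 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Feeding a checker's report through something like this tells you where to aim the first refactoring passes: the head of the list is your heatmap's red zone.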

Addendum: managing visual programming environments

The walled gardens of bespoke programming systems

Programming systems like the OP's G2 are different beasts.

- No source "code": Often they do not give you access to a textual representation of your source "code": it might be stored in a proprietary binary format, or it may store things in text format but hide them away from you. Bespoke graphical programming systems are actually not uncommon in research labs, as they simplify the automation of repetitive data-processing workflows.
- No tooling: Aside from their own, that is. You are often constrained by their programming environment, their own debugger, their own interpreter, their own documentation tools and formats. They are walled gardens, except if they eventually capture the interest of someone motivated enough to reverse-engineer their formats and build external tools, if the license permits it.
- Lack of documentation: Quite often, these are niche programming systems used in fairly closed environments. People who use them frequently sign NDAs and never speak about what they do. Programming communities for them are rare, so resources are scarce. You're stuck with the official reference, and that's it.

The ironic (and often frustrating) bit is that all the things these systems do could obviously be achieved with mainstream, general-purpose programming languages, and quite probably more efficiently. But that requires deeper knowledge of programming, whereas you can't expect your biologist, chemist, or physicist (to name a few) to know enough about programming, and even less to have the time (and desire) to implement and maintain complex systems that may or may not be long-lived. For the same reason we use DSLs, we have these bespoke programming systems.

Personal anecdote 2: Actually, I worked on one of these myself. I didn't make the link with the OP's request at first, but the project was a set of inter-connected large pieces of data-processing and data-storage software (primarily for bio-informatics research, healthcare, and cosmetics, but also for business intelligence, or any domain implying the tracking of large volumes of research data of any kind and the preparation of data-processing workflows and ETLs). One of these applications was, quite simply, a visual IDE with the usual bells and whistles: drag-and-drop interfaces, versioned project workspaces (using text and XML files for metadata storage), lots of pluggable drivers for heterogeneous datasources, and a visual canvas to design pipelines that process data from N datasources and in the end generate M transformed outputs, plus possibly shiny visualizations and complex (and interactive) online reports. Your typical bespoke visual programming system, suffering from a bit of NIH syndrome under the pretense of designing a system adapted to the users' needs.

And, as you would expect, it's a nice system, quite flexible for its needs, though sometimes a bit over-the-top, so that you wonder "why not use command-line tools instead?" And unfortunately, in medium-sized teams working on large projects, it always leads to a lot of different people using it with different "best" practices.

Great, we're doomed! So what do we do about it?

Well, in the end, all of the above still holds. If you cannot extract most of the programming from this system to use more mainstream tools and languages, you "just" need to adapt it to the constraints of your system.

About versioning and storage

In the end, you can almost always version things, even in the most constrained and walled environment. More often than not, these systems come with their own versioning (which is unfortunately often rather basic, just offering to revert to previous versions without much visibility, by keeping previous snapshots). It's not exactly using differential changesets like your SCM of choice might, and it's probably not suited to multiple users submitting changes simultaneously. But still, if they do provide such functionality, maybe your solution is to follow our beloved industry-standard guidelines above and transpose them to this programming system!

If the storage system is a database, it probably exposes export functionality, or can be backed up at the file-system level. If it uses a custom binary format, maybe you can simply try to version it with a VCS that has good support for binary data. You won't have fine-grained control, but at least you'll have your back sort of covered against catastrophes, and you'll have a certain degree of disaster-recovery compliance.
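Assuming the environment can export its project state to a directory (or its files can simply be copied out), a small background job can then commit snapshots to a Git repository. The `snapshot` helper below is a sketch under that assumption, not a feature of any particular system:

```python
"""Sketch: snapshot a walled-garden project's exported state into Git."""
import subprocess
from datetime import date

def snapshot(export_dir, message=None):
    """Stage and commit whatever changed in `export_dir` (an existing Git repo).
    Returns True if a commit was made, False if nothing changed."""
    message = message or f"Automatic snapshot {date.today().isoformat()}"
    subprocess.run(["git", "-C", export_dir, "add", "-A"], check=True)
    result = subprocess.run(
        ["git", "-C", export_dir, "commit", "-m", message],
        capture_output=True, text=True,
    )
    # git exits non-zero when there is nothing to commit.
    return result.returncode == 0
```

Run from a scheduled job after each export, this gives you coarse but real history, even for opaque binary formats: no fine-grained diffs, but a safety net against catastrophes.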

About testing

Implement your tests within the platform itself, and use external tools and background jobs to set up regular backups. Quite probably, you fire up these tests the same way you would fire up the programs developed with this programming system. Sure, it's a hack job and definitely not up to the standard of what is common for "normal" programming, but the idea is to adapt to the system while trying to maintain a semblance of professional software development process.

The road is long and steep...

As always with niche environments and bespoke programming systems, and as noted above, you deal with strange formats, only a limited (or totally nonexistent) set of possibly clunky tools, and a void in place of a community.

The recommendation: Try to implement the above guidelines outside of your bespoke programming system, as much as possible. This ensures that you can rely on "common" tools, which have proper support and community drive.

The workaround: When this is not an option, try to retrofit this global framework into your "box." The idea is to overlay this blueprint of industry-standard best practices on top of your programming system and make the best of it. The advice still applies: define structure and best practices, and encourage conformance. Unfortunately, this implies that you may need to dive in and do a tremendous amount of legwork. So...

Famous last words, and humble requests:

- Document everything you do.
- Share your experience.
- Open-source any tool you write.

By doing all of this, you will:

- increase your chances of getting support from people in similar situations;
- provide help to other people, and foster discussion around your technology stack.

Who knows, you could be at the very beginning of a new vibrant community for Obscure Language X. If there is none, start one!

- Ask questions on StackOverflow.com.
- Maybe even write a proposal for a new Stack Exchange site on Area 51.

Maybe it's beautiful inside, but nobody has a clue so far, so help take down this ugly wall and let others have a peek!

Answer: More specific to your case... (7 Votes)

Rob Z replies:

After looking into Gensym G2 for a bit, it looks like the way to approach this problem is going to depend heavily on how much of the code base consists of visual diagrams (the original answer embedded two screenshots of G2 graphical flowcharts at this point, not reproduced here) versus textual code like the following, courtesy of 99 Bottles of Beer:

beer-bottles()

    i: integer = 99;
    j: integer;
    constant: integer = -1;

    begin
        for i = 99 down to 1 do
            j = (i + constant);
            if (i = 1) then
                begin
                    post "[i] bottle of beer on the wall";
                    post "[i] bottle of beer";
                    post "Take one down and pass it around";
                    post "No bottle of beer on the wall";
                end
            else
                begin
                    post "[i] bottles of beer on the wall";
                    post "[i] bottles of beer";
                    post "Take one down and pass it around";
                    if (i = 2) then
                        post "[j] bottle of beer on the wall"
                    else
                        post "[j] bottles of beer on the wall";
                end
        end
    end

In the case of the latter, you are working with source code that is effectively a known quantity, and some of the other answers offer very sage advice for dealing with it.

If most of the code base is the former, or even if a sizable chunk is, you are going to run into the interesting problem of having code that likely cannot be refactored because it is extremely specialized; or worse, something that looks removable, but unless it is properly documented, you don't know whether you are removing critical code (think something along the lines of a scram operation) that doesn't appear critical at first glance.

Your first priority, obviously, is going to be getting some sort of version control online, as pointed out by ElYusubov, and it does appear that version control has been supported since version 8.3. Since G2 is a combination of a couple of different language methodologies, you will likely find it most effective to use the version control provided with it, rather than trying to find something else and getting it to work.

Next, although some would likely advocate starting to refactor right away, I'm a strong advocate of making sure you fully understand the system you are working with before you touch any of the code, especially when dealing with code and visual diagrams created by developers without formal training (or background) in software engineering methodologies. The reasoning for this is severalfold, but the most obvious reason is that you are working with an application that potentially has over 100 person-years of work in it, and you really need to make sure you know what it is doing and how much documentation exists for it. As you didn't say which industry the system is deployed in, based upon what I have been reading about G2 it seems safe to assume it is likely a mission-critical application that may even have life-safety implications. Thus, understanding exactly what it is doing is going to be very important. If there is code that is not documented, work with the others on the team to make sure documentation is put into place, so people can determine what the code does.

Next, start wrapping unit tests around as much of the code base and the visual diagrams as you can. I must admit to some ignorance regarding how to do this with G2, but it might almost be worth creating your own testing framework to get this in place. This is also an ideal time to start introducing the other members of the team to some of the more rigorous engineering practices involved with code quality (e.g., all code must have unit tests and documentation).

Once you have unit tests in place on a fair amount of the code, you can start approaching refactoring in a manner such as that suggested by haylem. However, keep in mind that you are dealing with something meant for developing expert systems, and refactoring it might be an uphill battle. This is actually an environment where there is something to be said for not writing extremely generic code at times.

Finally, make sure you pay close attention to what the other team members say; just because the code and diagram quality is not the best doesn't necessarily reflect poorly on them. Ultimately, for the time being, they are likely to know more about what the application does than you, which is why it is all the more important for you to sit down and make sure you understand what it does before making sweeping changes.

Think you know how best to deal with massive amounts of spaghetti code? Disagree with the opinions expressed above? Downvote or upvote an answer, or leave your own answer at the original post at Stack Exchange, a network of 80+ sites where you can trade expert knowledge on topics like Web apps, cycling, scientific skepticism, and (almost) everything in between.
