Author: theslowdiyer

Quote of the Day (17)

“Some people strive to be punctual. Some people strive to be worth waiting for.”

I know which one I’d rather be 😀


Scaled agile (what?)

As mentioned in a previous post, I am now part of an organisation that is attempting to convert much of its IT development to an “agile” approach, more specifically by adopting the “Scaled Agile Framework” (SAFe®). Although I am not directly involved in this work I still see and hear enough to notice some patterns emerging.

The origins of agile are in development, i.e. where you basically turn requirements into code. This is more or less the same all over, but when you start scaling agile to cover bigger “organisations”, then suddenly there are a couple of different flavours to be considered.

If you develop software as a product to sell to others, either by putting it on the shelf or using a “made-to-order” approach, the objective is either to put the most appealing product on the shelf or to meet the customer requirements as effectively and efficiently as possible. Once the product is done, that effectively concludes the exercise, and your agile approach is therefore really “scaled agile product development”.

However, if you are an organisation that develops or implements software for internal use using agile, then you are effectively doing “scaled agile system implementation”, which is a slightly different flavour from the development one. It’s different because you are now responsible as an organisation for realising the business benefits of the solution and not just completing the product (in essence, outcome vs. output responsibility), and that means that new parameters/considerations start to creep in.

  • How do we ensure usability of the solution over the entire life cycle?
  • How do we cover compliance requirements, not just for the base product but also whatever is required for usage and maintenance of the solution over the entire life cycle?
  • How do we cover user stories to deal specifically with upgrades and maintenance activities in our organisation? (and, just as importantly, how do we get those user stories prioritised during development?)
  • And last but not least (actually, probably the opposite!) – how do we include organisational change management activities and user acceptance as part of the agile implementation cycle?

That is of course not to say that these points are not important when you develop software for sale, but they form a bigger part of the total cost/total value equation when the scope is a full implementation and usage cycle – and it’s becoming clear to me that much of the complexity in scaling agile is not in the development, but in the parts that follow it when being agile at scale…

Career planning….

…for people that don’t (necessarily) care about careers!

As someone who has spent my entire working life since leaving university doing things that are very hard to describe without saying “it’s sort of a combination of….”, I’ve always struggled a bit with “career planning” and the inevitable development discussions with my various managers – and they’ve probably struggled just as much with the conversations as I have.

The truth is that the roles I find interesting and challenging very often don’t exist but have to be made by taking a role that does exist and then adding my own twist to it. Which means that discussions about development are extremely difficult, because there will almost always be elements of any standard role that I either don’t much care for or am not particularly qualified to do. I’ve never really thought of my work as a “career” but more as a series of assignments chosen based on what is available, what is interesting and what is challenging at any given moment – i.e. the “logical next step”. There was no “plan” to start out with and there still (mostly) isn’t now, 10-12 years later.

Fortunately, to help alleviate this situation, inspiration struck one day a few years ago and I came up with a model that I think suits me better – and hopefully also many others with similar profiles. It’s basically just a two-axis grid which shows function or job type on one axis and level/type of involvement on the other. I used:

as my dimensions, but it could just as well be:

or something else that suits your context.

You then plot roughly where you see yourself right now and then where you want to go. Not exactly rocket science, but very useful as a primer for discussions, because:

This model takes something that is absolute (what do you want to do?) and makes it relative (what do you want to do more of/less of compared to now?), and that makes a big difference because suddenly you can think of individual tasks or assignments rather than positions. It also makes a big difference for “the other side” (i.e. the manager): instead of spending time discussing positions that may or may not exist, let alone be available to fill, the manager gets an insight into what the team wants to do. That insight can then be used to “shuffle the deck” in the best possible way and try to give people tasks that they find challenging and inspiring, which in turn should improve development, satisfaction and retention all round.
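To make the grid a little more tangible, here is a minimal sketch of the plotting idea in Python – the axis labels and the “current”/“target” positions are purely hypothetical placeholders, not a recommendation (use whatever dimensions fit your own context):

    # Minimal sketch of the two-axis grid (all labels below are hypothetical
    # placeholders - substitute whatever dimensions fit your own context).
    functions = ["Development", "Architecture", "Project mgmt", "Line mgmt"]   # x-axis: function/job type
    involvement = ["Strategic", "Advisory", "Coordinating", "Hands-on"]        # y-axis: level/type of involvement

    current = ("Development", "Hands-on")    # roughly where you are today
    target = ("Architecture", "Advisory")    # roughly where you want to go

    print("Columns:", " | ".join(functions))
    for inv in involvement:
        cells = []
        for fn in functions:
            if (fn, inv) == current:
                cells.append("C")            # C = current position
            elif (fn, inv) == target:
                cells.append("T")            # T = target position
            else:
                cells.append(".")
        print(f"{inv:>13}  " + "  ".join(cells))

The tooling obviously doesn’t matter – a whiteboard works just as well – the point is the relative “more of this, less of that” conversation the grid enables.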

It’s probably not a silver bullet for all situations, but use it wisely and it should make some of those hard discussions a bit easier. Feel free to leave a comment if you try it 😀

Being SAFe…

New job, new abstraction level (EA), and among the first points on the agenda is an introduction to the Scaled Agile Framework (SAFe®).

As someone who has previously only had a high-level conceptual exposure to agile – but has always found the thinking behind it quite appealing – actually seeing it work (or at least being implemented) is a bit of a revelation.

What strikes me is that it is very stringent, and at the same time very sensitive to context. At first glance SAFe® itself looks like a very detailed playbook, but go a little deeper and I don’t really see a prescriptive way of doing things. The (excellent) glossary of explanations is peppered with “context-based” words like “fit for purpose”, “excessive”, “combination of” and so on. So really, the success of the framework in a given organisation relates very closely to the ability of the organisation to interpret these variables and find the right level of application for the specific context. Now those previous struggles I’ve seen with making agile work in practice suddenly make a lot more sense…

Another thing that makes an impression is the extreme reliance on dedicated resources to ensure minimal delays in execution and communication. The culture clash with the world I came from, where everyone is constantly booked on five projects at the same time, is plainly obvious… and the changes required to the corporate culture in order to execute a successful pivot to agile methods in a single department (let alone a full enterprise transformation) are staggering!

Looking forward to seeing where this is going – it’s going to be both fun and highly educational I’m sure 🙂

The art (and science) of non-linear trade-offs….

I’m currently in the process of a job change after some big changes in my current company. This has caused me to think quite a bit about which part of a job actually makes it interesting for me. To make a long story a bit shorter, I have come to the conclusion that most of the stuff that is really fun, and where I feel I add value to my organisation, involves what I have come to call “non-linear trade-offs”**. On talking to my current colleagues I get the impression I am not the only one who feels this way, and so I thought the topic worthy of a blog post here – it’s been a while since inspiration struck anyway, so I can’t afford to be too picky 😀

The linear trade-offs in an organisation, e.g. whether to add more money, resources etc. to something, are mostly the domain of Line-of-Business leaders and honestly not very interesting for an architect. They are also inherently linear (or at least approaching linearity within certain boundaries) – if you add more people to a department it will have a higher capacity but also a higher cost, and so on. These trade-offs therefore depend mostly on the available means and capabilities of the organisation, the risk appetite of senior management and the commitments to stakeholders outside the organisation – all of which is normally within the remit of the leadership to work on anyway.

Non-linear trade-offs, on the other hand, are the trade-offs where a small change on one side makes a big difference on the other side. This may be positive or negative, but it is actually surprisingly often the negative part that isn’t well understood, i.e. that you can sometimes invest nearly all the time/money and still only achieve a fraction of the value. The non-linear trade-off is therefore the obvious realm of the architect and other like-minded professionals who are able (and willing!) to see through the “illusion” of what a problem initially appears to be, through to the real root causes (or the real obstacles) that need to be addressed.

An example of a non-linear trade-off that I have seen in practice quite recently involves management reporting. A management report on your sales that shows a certain number of data points has a cost to develop and run – so far, so obvious. Now, you might be happy with a report that covers 50% of the information points, even if it comes at 80% of the cost, because there might be other benefits in terms of time-to-value etc. of going with a limited solution and then building on it later. However, only an idiot would pay for a management report that covers all the data points but is only 50% accurate, so whether that report is 50% or 20% of the cost of the “real” solution is immaterial – it’s still worthless! That is an example of a non-linear trade-off that is negative – if you are not prepared to invest in what is required to get close to 100% accuracy, then you might as well not bother starting the project at all (or abandon it if it is already running).
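To spell out the non-linearity, here is a small toy model of that reporting example – the budget scale, the numbers and the threshold-style value function are purely my own illustrative assumptions:

    # Toy model: cost grows roughly linearly with effort, but the value of the
    # report only materialises once accuracy is high enough to be trusted.
    def report_cost(effort: float) -> float:
        """Cost as a (roughly) linear function of invested effort, 0..1."""
        return 100_000 * effort                      # hypothetical budget scale

    def report_value(accuracy: float) -> float:
        """A half-accurate report is effectively worthless; value only appears
        once accuracy passes a usability threshold."""
        return 150_000 if accuracy >= 0.95 else 0.0  # hypothetical threshold

    for effort, accuracy in [(0.2, 0.50), (0.5, 0.50), (0.8, 0.95), (1.0, 0.99)]:
        print(f"effort {effort:>4.0%}  accuracy {accuracy:>4.0%}  "
              f"cost {report_cost(effort):>9,.0f}  value {report_value(accuracy):>9,.0f}")

The value curve is nowhere near a straight line, so paying “half the cost” does not buy “half the value” – in this case it buys nothing at all.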

Another recent example for me is a fairly long-winded discussion on which tool to use to manage and improve some underperforming data management processes. The push from the business is to invest in a dedicated governance solution, because that’s what you normally do. The pushback from yours truly and some key colleagues is that this isn’t necessary. On paper the decision is simple – you go for the “proper” option. However, most of the problems with the process in question actually consist of a lack of clarity about what happens in the process, a lack of definitions of what information needs to be collected, a lack of clear and enforced roles & responsibilities etc. That means a substantial portion of the value could be achieved by putting the process into any tool – because doing even that will simply not be possible without addressing most of these basic business shortcomings first. Taking this approach would also cut implementation time (as you would be using an existing capability), reduce the overall investment, avoid adding to the capability footprint of the organisation etc.

Good old Pareto raises a hand over in the corner, and that’s obviously correct, but not all non-linear trade-offs are close to an 80/20 split. That’s also fine – the important part is recognising that they are non-linear, and that there can therefore be large benefits in spending time on finding an optimum here 🙂

Now, where am I going with this? Well, as mentioned in the beginning it has actually helped me explain to myself (and a few others) what I enjoy doing – and why I don’t really care about some decisions but will spend a long time on others. I suspect it might also be helpful in my future endeavours as I prepare to (officially) enter the world of EA in a few weeks’ time 🙂

**I came up with this term recently, but it could be that it already exists and I’ve just seen it somewhere without realising it, so I’m not planning to trademark it – and please don’t shoot me if you’ve seen it somewhere else 🙂

The cost of capabilities…

Do you understand the true cost of your capabilities?

The topic of this post first appeared in the comments section of a post by Gene Hughson and Gene latched on to my thoughts and posted a great follow-up of his own here. As the topic reappeared in some discussions at work over the last few weeks, I figured now was the time to write it up as a separate post here also 🙂

Imagine you buy a car, and each month you pay off the loan/lease and the cost of running the car in full. When, after 10 years of running the car and paying off every penny of buying and keeping it, it finally dies, you would think that you would be in the clear, right? You could now make a decision on whether to buy a new car based on the current cost of buying and maintaining a new car, just as you did the first time. Well, if you are like most people you will have built your life around having the car, and so you don’t really have a choice but to replace the old car when it’s finally dead – that’s the hidden cost of capabilities.

That cost of sustaining your capabilities also very much applies to IT systems (you couldn’t really live without that CRM system now, could you? 😀 ) but it is something that most organisations seem to overlook – and don’t think about until it is too late. This goes for the “core” enterprise systems (ERP, CRM, SCM, PLM etc.), where it is probably mostly the cost of upgrades and patching rather than the cost of replacement that isn’t properly factored in, but still.

Where I would imagine it applies even more is when you delve into the more fast-paced layer of customer/consumer-facing applications, e.g. mobile. If you want to offer your customers a mobile application, you have to consider the cost of updating it as new OSes and new hardware come along. Eventually, some of your underlying platform technologies may also die, but you still want to have the mobile app, and so the cost of porting/converting/rebuilding it onto another technology stack comes on top.

This (hopefully) isn’t exactly rocket surgery, but as mentioned it does seem to be overlooked quite often, and it is also hard to predict what the real future costs are.

So, what to do about it? Well, if you can’t predict it, you have to find ways to minimise the impact when it does hit and so your two best friends are now architecture and strategy. Architecture to ensure that what is built will continue to be fit-for-purpose for as long as possible, and strategy to ensure that the new capabilities you add will be capabilities you need, and not just capabilities you want.

This way, there should be strategic support for continuing to invest in these capabilities, and there should equally be an understanding from management that increasing your capability footprint will inevitably lead to an increase in your baseline cost.

The rule of 2…

(or: which part of the creative process are you?)

I’ve found that when I start something new (new process, new functionality etc.) it often tends to follow the same pattern, which means it takes:

  • 2 seconds to get the initial idea – that flash of inspiration that tells you that something could be done better/smarter etc.
  • 2 minutes to think through the concept and the implications and convince yourself that it is indeed a good idea.
  • 2 hours to write up a case for the change with arguments and benefits, a quick impact assessment etc.

But then it takes:

  • between 2 days and 2 weeks to convince those around you that it is indeed worth the hassle of making a change and upsetting the status quo.
  • between 2 months and 2 years to actually approve and implement the changes and see the original idea come to fruition and deliver value to the business.

And so you might ask, what can I use this for? Well, perhaps to think a little about where your strengths are and where you want to be in this cycle? Are you the person who gets the initial idea and sticks with it for just long enough to explain it to someone else, or are you the person that gets a kick out of following something for a long time to finally see it deliver value?

I guess there is no right or wrong answer, but for me personally at least it has made me understand a little better how long I prefer to stick with an idea before handing it over to someone else – and of course what projects I should try and avoid getting assigned to 🙂

On architects and doctors….

I want to make it clear that I have tremendous respect for doctors and I don’t think that a comparison is entirely justified – after all the decisions I’m called upon to make on a daily basis are hardly “life-and-death”. Also, rather than anything remotely dangerous or traumatising, typical occupational risks for someone like me are likely to be the mental anguish of too many meetings without a clear purpose and the physical impact of 8-12 hours per day in an office chair…

I will however maintain that this works as a comparison, a) because of quite a few similarities in patterns, and b) because the context in which the doctor operates is much more clear-cut than the world of SW architecture, making it more easily understandable to “normal” people in a business (whatever that means… :D).

Examples of where I find this analogy useful as a means of communicating what I do to those around me:

– 1) Listening to requirements is to me somewhat akin to listening to a patient describing symptoms and carries the same inherent risk of jumping to conclusions about what the problem really is. The doctor has to cut through the patient’s own ideas of what the problem is, the patient’s preferred solutions to said problem and any “false flags” because people simply don’t necessarily realise what is important to tell the doctor.

– 2) Like the doctor, the architect also has to balance short-term inconvenience/discomfort with long-term benefits for the patient. That means sometimes causing a patient to go through very painful procedures because they will give the best end result. Good doctors recognise that while some decisions clearly should be made by the patient, some decisions should be made by an expert that has the full picture and a more objective position on what the right decision is. I doubt that anyone would leave all medical decisions to the patient, but some people seem very prepared to insist that all IT decisions are made by the business (or exclusively by IT, which IMHO is equally wrong).

– 3) The doctor has to keep the good of the patient front and center, but the doctor must also be prepared to make uncomfortable decisions for the good of the patient (cf. point 2), remain objective while doing so and then be able to stay (relatively) calm and composed when someone afterwards starts to second-guess the decision they made.

– 4) Last, but not least: Patients normally come to doctors because doctors are experts and they are prepared to accept an expert opinion, but the doctor understands the responsibility of making decisions based on the relatively limited input that a patient can provide. I have seen some curious practices, for instance regarding sign-off of requirements and solutions, which, if you transplant them to a doctor/patient context, clearly do not make sense at all. Some of them, when transplanted into this alternative context, would effectively mean that you had to be a doctor yourself in order to get any value out of seeing a doctor…


…or am I completely off here?