
Tax the data


If data is the new oil then why don’t we tax it?

My data is worth something to Google, and Facebook, and Twitter, and Amazon… and just about every other Internet behemoth. But on its own my data is worth a tiny, tiny amount.

I’d like to charge Google and co. for having my data. The amount they currently pay me – free search, free email, cheap telephone… – doesn’t really add up. In fact, what Google pays me doesn’t pay my mortgage, but somehow Larry Page and Sergey Brin are very, very rich. Even if I did invoice Google, and even if Google paid, we are talking pennies at most.

But Google don’t just have my data, they have yours, your Mum’s, our friends’, our neighbours’ and just about everyone else’s. Put it all together and it is worth more than the sum of the parts.

Value of my data to Google < 1p
Value of your data to Google < 1p
Value of our combined data to Google > 2p

The whole is worth more than the sum of the parts.
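
To see why, here is a toy illustration. Suppose – purely for illustration, both the constant and the squaring are invented – that a dataset’s value grows with the square of the number of profiles it links together, Metcalfe-style:

```python
# Toy model: a dataset's value grows with the square of the profiles it links.
# The 0.5 constant is invented purely to make the pennies line up.
def value_pence(profiles: int) -> float:
    return 0.5 * profiles ** 2

print(value_pence(1))        # my data alone: 0.5p
print(2 * value_pence(1))    # two separate one-profile datasets: 1.0p
print(value_pence(2))        # the combined dataset: 2.0p -- more than the sum
```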

At the same time Google – and Facebook, Amazon, Apple, etc. – don’t like paying taxes. They like the things those taxes pay for (educated employees, law and order, transport networks, legal systems – particularly the bit of the legal system that deals with patents and intellectual property) but they don’t want to pay.

And when they do pay they find ways of minimising the payment and moving money around so it gets taxed as little as possible.

So why don’t we tax the data?

Governments, acting on behalf of their citizens should tax companies on the data they harvest from their users.

All those cookies that DoubleClick put on your machine.

All those profile likes that Facebook has.

Sure there is an issue of disentangling what is “my data” from what is “Google’s data” but I’m sure we could come up with a quota system, or Google could be allowed a tax deduction. Or they could simply delete the data when it gets old.

I’d be more than happy if Amazon deleted every piece of data they have about me. Apple seem to make even more money than Google and make me pay. While I might miss G-Drive I’d live (I pay DropBox anyway).

Or maybe we tax data-usage.

Maybe it’s the data users, the Cambridge Analyticas of this world, who should be paying the data tax. Pay for access, the ultimate firewall.

The tax would be levied per user within a geographic boundary, so these companies couldn’t claim the data was in low-tax Ireland: the data generators (you and me) reside within state boundaries. If Facebook wants to have users in England then they would need to pay the British Government’s data-tax. If data that originates with English users is used by a company, no matter where that company is, then it needs to give the Government a cut.
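
As a sketch of the mechanics – the rates and structure here are entirely invented, this is nothing like real tax law – the levy would be keyed on where each user resides, not on where the company books its profits:

```python
# Hypothetical per-user data tax, keyed on user residence.
RATE_PENCE_PER_USER = {"GB": 2.0, "NO": 3.0, "IE": 0.5}   # invented rates

users = [
    {"id": 1, "residence": "GB"},
    {"id": 2, "residence": "GB"},
    {"id": 3, "residence": "NO"},
]

def data_tax_due(users):
    """Total owed to each government for its resident users."""
    due = {}
    for user in users:
        country = user["residence"]
        due[country] = due.get(country, 0.0) + RATE_PENCE_PER_USER[country]
    return due

print(data_tax_due(users))   # {'GB': 4.0, 'NO': 3.0}
```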

This isn’t as radical as it sounds.

Governments have a long history of taxing resources – consider property taxes. Good taxes encourage consumers to limit their consumption – think cigarette taxes – and it may well be a good thing to limit some data usage. Anyway, that’s not a hard and fast rule – the Government taxes my income and they don’t want to limit that.

And consider oil – after all, how often are we told that data is the new black gold? Countries with oil impose a tax (or charge) on the oil companies which extract it.

Oil taxes demonstrate another thing about tax: Governments act on behalf of their citizens, like a class-action.

Consider Norway: every citizen of Norway could lay claim to part of the Norwegian oil reserves, and each could individually invoice the oil companies for their share. But that wouldn’t work very well – too many people, and again, the whole is worth more than the sum of the parts. So the Norwegian Government steps in, taxes the oil and then uses the revenue for the good of the citizens.

In a few places – like Alaska and the Shetlands – we do see oil companies distributing money more directly.

After all, Governments could do with a bit more money and if they don’t tax data then the money is simply going to go to Zuckerberg, Page, Bezos and co. They wouldn’t miss a little bit.

And if this brings down other taxes, or helps fund a universal income, then people will have more time to spend online using these companies and buying things through them.



MVP is a marketing exercise not a technology exercise

… Minimally Viable Product

Possibly the most fashionable and misused term in the digital industry right now. The term seems to be used by one side or the other to criticise the other.

I recently heard another Agile Coach say: “If you just add a few more features you’ll have an MVP” – I wanted to scream “Wrong, wrong, wrong!” But I bit my tongue (who says I can’t do diplomacy?)

MVP often seems to be the modern way of saying “The system must do”; MVP has become the M in the MoSCoW rules.

Part of the problem is that the term means different things to different people. Originally coined to describe an experiment (“what is the smallest thing we could build to learn something about the market?”) it is almost always used to describe a small product that could satisfy the customer’s needs. But when push comes to shove that “small” usually isn’t very small. When MVP is used to mean “cut everything to the bone” the person saying it still seems to leave a lot of fat on the bone.

I think non-technical people hear the term MVP and think “a product without all that gold-plating and software engineering fat that slows the team down.” Such people show how little they actually understand about the digital world.

MVPs should not be about technology. An MVP is not about building things.

An MVP is a marketing exercise: can we build something customers want?

Can we build something people will pay money for?

Before you use the language MVP you should assume:

  1. The technology can do it
  2. Your team can build it

The question is: is this thing worth building? And before we waste money building something nobody will use, let alone pay for, what can we build to make sure we are right?

The next time people start sketching an MVP divide it by 10. Assume the value is 10% of the stated value. Assume you have 10% of the resources and 10% of the time to build it. Now rethink what you are asking for. What can you build with a tenth?

Anyway, the cat is out of the bag, as much as I wish I could erase the abbreviation and name from collective memory I can’t. But maybe I can change the debate by differentiating between several types of MVP, that is, several different ways the term MVP is used:

  • MVP-M: a marketing product, designed to test what customers want, and what they will pay for.
  • MVP-T: a technical product designed to see if something can be built technologically – historically the terms proof-of-concept and prototype have been used here
  • MVP-L: a list of MUST HAVE features that a product MUST HAVE
  • MVP-H: a HiPPO MVP, a special case of MVP-L: the highest-paid person’s opinion of the feature list; unfortunately you might find several different people believe they have the right to set the feature list
  • MVP-X: where X is a number (1,2, 3…), this derivative is used by teams who are releasing enhancements to their product and growing it. (In the pre-digital age we called this a version number.) Exactly what is minimal about it I’m not sure but if it helps then why not?

MVP-M is the original meaning while MVP-L and MVP-H are the most common types.

So next time someone says “MVP” just check, what do they mean?



Definition of Ready considered harmful


Earlier this week I was with a team and the discussion turned to “the definition of ready.” This little idea has been growing more and more common in the last couple of years and while I like the concept I don’t recommend it. Indeed, I think it could well reduce Agility.

To cut to the chase: “Definition of ready” reduces agility because it breaks up process flow, assumes greater role specific responsibilities, introduces more wait states (delay) and potentially undermines business-value based prioritisation.

The original idea builds on “definition of done”. Both are semi-formal checklists agreed by the team and applied to pieces of work (stories, tasks, whatever). Before any piece of work is considered “done” it should satisfy the definition of done, so the team member who has done a piece of work should be able to mentally tick each item on the checklist. Typically a definition of done might contain:

  • Story implemented
  • Story satisfies acceptance criteria
  • Story has been seen and approved by the product owner
  • Code is passing all unit and acceptance tests
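
Purely as an illustration – the story fields are invented, and as I argue below the list should stay informal – a definition of done amounts to a set of predicates a piece of work must satisfy:

```python
from dataclasses import dataclass

@dataclass
class Story:
    implemented: bool = False
    acceptance_criteria_met: bool = False
    approved_by_product_owner: bool = False
    all_tests_passing: bool = False

# The definition of done: every predicate must hold.
DEFINITION_OF_DONE = [
    ("Story implemented", lambda s: s.implemented),
    ("Satisfies acceptance criteria", lambda s: s.acceptance_criteria_met),
    ("Approved by the product owner", lambda s: s.approved_by_product_owner),
    ("Unit and acceptance tests passing", lambda s: s.all_tests_passing),
]

def is_done(story: Story) -> bool:
    return all(check(story) for _, check in DEFINITION_OF_DONE)
```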

Note I say “mentally” and I call these lists “semi-formal”, because if you start having a physical checklist for each item, physically ticking the boxes, perhaps actually signing them, and presumably filing the lists or having someone audit them, then the process is going to get very expensive very quickly.

So far so good? – Now why don’t I like definition of ready?

On the one hand definition of ready is a good idea: before work begins on any story some pre-work has been done to ensure it is “ready for development” – yes, typically this is about getting stories ready for coding. Such a checklist might say:

  • Story is written in User Story format with a named role
  • Acceptance criteria have been agreed with product owner
  • Developer, Tester and Product owner have agreed story meaning

Now on the other hand… even doing these means some work has been done. Once upon a time the story was not ready; someone, or some people, worked on the story to make it ready. When did this happen? Getting the story ready has already detracted from doing other work – work which was a higher priority because it was scheduled earlier.

Again, when did this happen?

If the story became “ready” yesterday then no big deal. The chances are that little has changed.

But if it became ready last week are you sure?

And what if it became ready last month? Or six months ago?

The longer it has been ready the greater the chance that something has changed. If we don’t check and re-validate the “ready” state then there is a risk something will have changed and be done wrong. If we do validate then we may well be repeating work which has already been done.

In general, the later the story becomes “ready” the better. Not only does it reduce the chance that something will change between becoming “ready” and work starting but it also minimises the chance that the story won’t be scheduled at all and all the pre-work was wasted.

More problematic still: what happens when the business priority is for a story that is not ready?

Customer: Well Dev team story X is the highest priority for the next sprint
Scrum Master: Sorry customer, Story X does not meet the definition of ready. Please choose another story.
Customer: But all the other stories are worth less than X so I’d really like X done!

The team could continue to refuse X – and sound like an old-style trade unionist in the process – or they could accept X, make it ready and do it.

Herein lies my rule of thumb:

If a story is prioritised and scheduled for work but is not considered “ready” then the first task is to make it ready.

Indeed this can be generalised:

Once a story is prioritised and work starts then whatever needs doing gets done.

This simplifies the work of those making the priority calls. They now just look at the priority (i.e. business value) of work items. They don’t need to consider whether something is ready or not.

It also eliminates the problem of “when”.

Teams which practise “definition of ready” usually expect their product owner to make stories ready before the iteration planning meeting, and that creates the problems above. Moving “make ready” inside the iteration, perhaps as a “3 Amigos” session after the planning meeting, eliminates this problem.

And before anyone complains, saying “How can I estimate something that is not prepared?”, let me point out that you can. You are just estimating something different:

  • When you estimate “ready” stories you are estimating the time it takes to move a well formed story from analysis-complete to coding-complete
  • When you estimate an “unready” story you are estimating the time it takes to move a poorly formed story from its current state to coding-complete

I would expect the estimates to be bigger – because there is more work – and I would expect the estimates to be subject to more variability – because the initial state of the story is more variable. But it is still quite doable; it is an estimate, not a promise.
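
A minimal simulation makes the point – every number here is invented, it only illustrates the shape of the difference:

```python
import random

def estimate_ready():
    """Days to take a well-formed story from analysis-complete to coding-complete."""
    return max(0.5, random.gauss(3.0, 0.5))    # invented figures

def estimate_unready():
    """Make-ready work comes first, then the usual work: bigger and more variable."""
    make_ready = max(0.0, random.gauss(2.0, 1.5))
    return make_ready + estimate_ready()

def summarise(samples):
    mean = sum(samples) / len(samples)
    spread = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
    return f"mean {mean:.1f} days, spread {spread:.1f}"

print("ready:  ", summarise([estimate_ready() for _ in range(10_000)]))
print("unready:", summarise([estimate_unready() for _ in range(10_000)]))
```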

I can see why teams adopt definition of ready and I might even recommend it myself, but I’d hope it was a temporary measure on the way to something better.

In teams with broken, role-based process flows a definition of done for each stage can make sense. The definition of done at the end of one activity is the definition of ready for the next. For teams adopting Kanban-style processes I would recommend this approach as part of process/board set-up. But I also hope that over time the board columns can be collapsed down and the definitions dropped.



What if it is all random?


What if success in digital business, and in software development, is random? What if one cannot tell in advance what will succeed and what will fail?

My cynical side sometimes thinks everything is random. I don’t want to believe my cynical side but…

All those minimally viable products, some work, some fail.

All those stand-up meetings, do they really make a difference?

All those big requirements documents, just sometimes they work.

How can I even say this? – I’ve written books on how to “do it right.” I advise companies on how to improve “processes.” I’ve helped individuals do better work.

And just last month I was at a patterns conference trying to spot recurring patterns and why they are patterns.

So let me pause my rational side and indulge my cynical side: what if it is all random?

If it is all random what we have to ask is: What would we do in a random world?

Imagine for a moment success is like making a bet at roulette and spinning the wheel.

Surely we would want to both minimise losses (small bets) and maximise wheel spins: try lots, remove the failures quickly and expand the successes (if we can).
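
A quick simulation shows why – the odds, payoff and stake are invented, only the shape matters: with the same total stake, many small bets dramatically cut the chance of walking away with nothing.

```python
import random

P_WIN, PAYOFF, STAKE, TRIALS = 0.1, 5, 100, 10_000

def one_big_bet():
    return STAKE * PAYOFF if random.random() < P_WIN else 0.0

def many_small_bets(n=20):
    per_bet = STAKE / n
    return sum(per_bet * PAYOFF for _ in range(n) if random.random() < P_WIN)

def chance_of_nothing(bet):
    return sum(1 for _ in range(TRIALS) if bet() == 0) / TRIALS

print(chance_of_nothing(one_big_bet))       # ~0.90: one spin, usually lose all
print(chance_of_nothing(many_small_bets))   # ~0.12 (0.9 ** 20): rarely lose all
```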

I suggested “it’s all random” to someone the other day and he replied “It is not random, it’s complex.” And we were into Cynefin before you could say “spin the wheel.”

Dave Snowden’s Cynefin model attempts to help us understand complexity and the complex. Faced with complexity Cynefin says we should probe. That is, try lots of experiments so we can understand, learn from the experiments and adjust.

If the experiment “succeeds” we understand more and can grow that learning. Where the experiment “fails” we have still learned but we will try a different avenue next time.

Look! – it is the same approach, the same result, whether complexity, Cynefin or just random: try a lot, remove failure and build on success. And hang on, where have I heard that before?… Darwin and evolution: random gene mutations which confer a benefit get propagated, and in time the others die out.

It is just possible that Dave is right, Darwin is right and I am right…

Today most of the world’s mobile/cell telephone systems are built on CDMA technology. CDMA is super-complex maths but it basically works by encoding a signal (a sequence of numbers – your voice digitised) and injecting it into a random number stream (the radio spectrum); provided you know the encoding you can retrieve the signal from the randomness. Quite amazing really.

Further, provided the number sequences are sufficiently different they are in effect random to one another, so you can inject more signals into the same space.

That is why we can all use our mobile phones at the same time.
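
Here is a toy version of the idea – two users and four-chip Walsh codes, where real CDMA is vastly more sophisticated. Each user’s bits are spread with a distinct orthogonal code, the signals are simply added together, and each receiver recovers its own bits by correlating the combined signal with its code:

```python
# Orthogonal 4-chip spreading codes (Walsh codes): their dot product is zero.
CODES = {
    "alice": [1,  1,  1,  1],
    "bob":   [1, -1,  1, -1],
}
BITS = {"alice": [1, -1, 1], "bob": [-1, -1, 1]}   # bits encoded as +/-1

# Everyone transmits at once: the shared channel is the chip-wise sum.
channel = [
    sum(BITS[user][i] * CODES[user][chip] for user in CODES)
    for i in range(len(BITS["alice"]))
    for chip in range(4)
]

def despread(user):
    """Correlate the channel with one user's code to recover their bits."""
    recovered = []
    for start in range(0, len(channel), 4):
        corr = sum(channel[start + c] * CODES[user][c] for c in range(4))
        recovered.append(1 if corr > 0 else -1)
    return recovered

assert despread("alice") == BITS["alice"]
assert despread("bob") == BITS["bob"]
```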

Put it another way: you walk into a party in London, in the room are Poles, Lebanese, Germans, Argentinians and the odd Brit. They are all talking in their own language to fellow speakers. Somehow you can hear your own language and join the right conversation. Everything else is random background noise.

Maybe the same is true in digital business and software development…

Perhaps it is all complex, but it is so complex that we will never be able to follow all the cause-and-effect chains; so complex that it looks random. Dave is right with Cynefin, but maybe there is so much complexity that we might as well treat it as random and save our time.

Back to CDMA and London parties: faced with apparent randomness there are still useful strategies, and signals can still be extracted.

Perhaps the way to deal with this complexity is not to try and understand it but to treat it as random. Rather than expend energy and time on a (possibly) impossible task accept it as random and apply appropriate strategies.

After all, if we have learned anything from statistical distributions it is that faced with actual and apparent randomness we can still find patterns, we can still learn and we can still work with, well, randomness.


Programmer’s Rorschach test

I recently added the picture above to Continuous Digital for a discussion of teams. When you look at it, what do you see?

An old-style structure chart, or an organization chart?

It could be either and anyone who knows of Conway’s Law shouldn’t be surprised.

When I was taught Modula-2 at college this sort of structure chart was considered superior to the older flow charts. This is functional decomposition: take a problem, break it down into smaller parts and implement them.

And that is the same idea behind the traditional hierarchical organizational structure. An executive heads a division; he has a number of managers under him who manage the work; each of these manages several people who actually do the work (or perhaps manages more managers who manage the people who do the work!)

Most organizations are still set up this way. It is probably unsurprising that 50 years ago computer programmers copied this model when designing their systems – Conway’s Law: the system is a copy of the organization.

Fast forward to today: we use object oriented languages and design, but most of our organizations are still constrained by hierarchical structure, and that creates a conflict. The company is functionally decomposed but our code is object oriented.

The result is conflict and in many cases the organization wins – who hasn’t seen an object oriented system that is designed in layers?

While the conflict exists both system and organization underperform, because time and energy are spent living the conflict, managing the conflict, overcoming the conflict.

What would the object-oriented company look like?

If we accept that object oriented design and programming are superior to procedural programming (and in general I do, although I miss Modula-2) then it becomes necessary to change the organization to match the software design – reverse Conway’s Law, or Yawnoc. That means we need teams which look and behave like objects (there is a code sketch of the analogy after this list):

  • Teams are highly cohesive (staffed with various skills) and loosely coupled (dependencies are minimised and the team takes responsibility)
  • Teams are responsible for a discrete part of the system end-to-end
  • Teams add value in their own right
  • Teams are free to vary their organizational implementation behind a well-defined interface
  • Teams are tested, debugged and maintained: they have been through the storming phase, are now performing and are kept together
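
To make the analogy concrete, a sketch – the team name and methods are entirely hypothetical:

```python
class PaymentsTeam:
    """A team responsible for one discrete part of the system, end to end."""

    def deliver(self, request: str) -> str:
        # The public interface: the rest of the organization couples to
        # this, not to how the team works internally.
        return self._build_test_and_ship(request)

    def _build_test_and_ship(self, request: str) -> str:
        # The private implementation: pairing, mobbing, who does what --
        # the team can change all of this without breaking callers.
        return f"shipped: {request}"
```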

There are probably some more attributes I could add here, please make your own suggestions in the comments below.

To regular readers this should all sound familiar: I’ve been espousing these ideas for a while. They draw on software design and Amoeba management, and I discuss them at greater length in Xanpan, The Xanpan Appendix and now Continuous Digital – actually, Continuous Digital directly updates some chapters from the Appendix.

And as so many programmers have found over the years, classes named “Manager” are more than likely poorly named and poorly designed. Manager is a catch-all name; the class might well be doing something very useful, but it can be named better. If it isn’t doing anything useful, then maybe it should be refactored into something that is. Most likely the ManagerClass is doing a lot of useful stuff, but it is far from clear that it all belongs together. (See the management mini-series.)

Sometimes managers or manager classes make sense, but both deserve closer examination. Are they a vestige of the hierarchical world? Do they perform specialist functions which could be packaged and named better? Perhaps they are necessary; perhaps they are necessary for smoothing the conflict between the hierarchical organization and the object oriented world.

Transaction costs can explain both managers and manager classes. There are various tasks which require knowledge of other tasks, or access to the same data. It is cheaper, and perhaps simpler, to put these diverse things together rather than pay the cost of spreading access out.

Of course if you accept the symbiosis of organization and code design then one should ask: what should the organization look like when the code is functional? What does the Lisp, Clojure or F# organization look like?

And, for that matter, what does the organization look like when you program in Prolog? What does a declarative organization look like?

Finally, I was about to ask about SQL – what does the relational organization look like? – but I think you’ve already guessed the answer to this one: a matrix, probably a dysfunctional matrix.
