Archive for March, 2021

Toward e-commerce 2.0

March 25, 2021

Phil Windley explains e-commerce 1.0 in a single slide that says this:

One reason this happened is that client-server, aka calf-cow (illustrated in Thinking outside the browser), has been the default format for all relationships on the Web, and cookies are required to maintain those relationships. The result is a highly lopsided power asymmetry in which the calves have no more power than the cows give them. As a result,

  1. The calves have no easy way even to find (much less to understand or create) the cookies in their browsers’ jars.
  2. The calves have no identity of their own, but instead have as many different identities as there are websites that know (via cookies) their visiting browsers. This gives them no independence, much less a place to stand like Archimedes, with a lever on the world. The browser may be a great tool, but it’s neither that place to stand, nor a sufficient lever. (Yes, it should have been, and maybe still could be; but meanwhile, it isn’t.)
  3. All the “agreements” the calves have with the websites’ cows leave no readable record on the calves’ side. This severely limits their capacity for dispute, which is required for a true relationship.
  4. There exists no independent way for the calves to signal their intentions—such as interest in purchasing, conditions for engagement, or the need to be left alone (which is how Brandeis and Warren define privacy).

In other words, the best we can do in e-commerce 1.0 is what the calf-cow system provides: ways for calves to depend utterly on means the cows provide. And some of those cows are mighty huge.

Nearly all signaling between demand and supply remains trapped inside these silos and walled gardens. We search inside their systems, we are notified of product and service availability inside their systems, we make agreements inside their systems (to terms and conditions they provide and require), our privacy is dependent on their systems, and product and service delivery is handled either inside their systems or through allied and dependent systems.

Credit where due: an enormous amount of good has come out of these systems. But a far larger amount of good is MLOTT—money left on the table—because there is a boundless sum and variety of demand and supply that still cannot easily signal their interests, intentions, or presence to each other in the digital world.

Putting that money on the table is our job in e-commerce 2.0.

So here is a challenge: tell us how we can do that without using browsers.

Some of us here do have ideas. But we’d like to hear from you first.


Cross-posted at the ProjectVRM blog, here.

Thinking outside the browser

March 21, 2021

Even if you’re on a phone, chances are you’re reading this in a browser.

Chances are also that most of what you do online is through a browser.

Hell, many—maybe even most—of the apps you use on your phone use the WebKit browser engine. Meaning they’re browsers too.

And, of course, I’m writing this in a browser.

Which, alas, is subordinate by design. That’s because, while the Internet at its base is a world-wide collection of peers, the Web that runs on it is a collection of servers to which we are mere clients. The model is an old mainframe one called client-server. This is actually more of a calf-cow arrangement than a peer-to-peer one:

The reason we don’t feel like cattle is that the base functions of a browser work fine and misdirect us from the actual subordination of personal agency and autonomy that’s also taking place.

See, the Web invented by Tim Berners-Lee was just a way for one person to look at another’s documents over the Internet. And it still is. When you “go to” or “visit” a website, you don’t go anywhere. Instead, you request a file. Even when you’re watching or listening to an audio or video stream, what actually happens is that a file unfurls itself into your browser.
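To make that concrete, here is a minimal sketch, in Python with only the standard library, of what a “visit” really is: an HTTP request that returns a file. The URL is just an example; any page works the same way.

  # A rough sketch of what a browser does when you "visit" a page:
  # it sends an HTTP GET request and gets a file back in response.
  from urllib.request import urlopen

  with urlopen("https://example.com/") as response:
      html = response.read()                       # the "page" is just a file of HTML
      print(response.status)                       # e.g. 200
      print(response.headers["Content-Type"])      # e.g. text/html; charset=UTF-8
      print(len(html), "bytes received")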

What you expect when you go to a website is typically a file called a page. You also expect that page will bring a payload of other files: ones providing graphics, video clips, or whatever. You might also expect the site to remember that you’ve been there before, or that you’re a subscriber to the site’s services.

You may also understand that the site remembers you because your browser carries a “cookie” the site put there, to help the site remember what’s called “state,” so the browser and the site can renew their acquaintance with every visit. It is for this simple purpose that Lou Montulli invented the cookie in the first place, back in 1994. Lou got that idea because the client-server model puts the most agency on the server’s side, and in the dial-up world of the time, that made the most sense.
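Here is a rough sketch of that exchange, again in Python with a stand-in hostname: the server offers a Set-Cookie header on the first response, and the browser plays the same value back in a Cookie header on the next request, so the site can recognize the same client.

  # Sketch of how a cookie maintains "state" between visits.
  # "www.example.com" stands in for any site that actually sets cookies.
  import http.client

  conn = http.client.HTTPSConnection("www.example.com")

  # First visit: no cookie yet, so the server may hand one over.
  conn.request("GET", "/")
  resp = conn.getresponse()
  resp.read()                              # drain the body so the connection can be reused
  cookie = resp.getheader("Set-Cookie")    # e.g. "session=abc123; Path=/" (if the site sets one)

  # Second visit: send the cookie back, renewing the acquaintance.
  headers = {"Cookie": cookie.split(";")[0]} if cookie else {}
  conn.request("GET", "/", headers=headers)
  print(conn.getresponse().status)
  conn.close()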

Alas, even though we now live in a world where there can be boundless intelligence on the individual’s side, and there is far more capacious communication bandwidth between network nodes, damn near everyone continues to presume a near-absolute power asymmetry between clients and servers, calves and cows, people and sites. It’s also why today when you go to a site and it asks you to accept its use of cookies, something unknown to you (presumably—you can’t tell) remembers that “agreement” and its settings, and you don’t—even though there is no reason why you shouldn’t or couldn’t. It doesn’t even occur to the inventors and maintainers of cookie acceptance systems that a mere “user” should have a way to record, revisit or audit the “agreement.” All they want is what the law now requires of them: your “consent.”
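For illustration only: no mainstream consent system produces anything like this today, but a user-side record of such an “agreement” could be as simple as a line appended to a local file the person controls. The site name, field names, and policy URL below are all made up.

  # Hypothetical sketch of a user-side consent record; nothing like this
  # is produced by today's cookie-notice systems.
  import json
  from datetime import datetime, timezone
  from pathlib import Path

  record = {
      "site": "news.example.com",                              # made-up example site
      "timestamp": datetime.now(timezone.utc).isoformat(),
      "choice": "accepted essential cookies only",
      "terms_seen": "https://news.example.com/cookie-policy",  # illustrative URL
  }

  log_file = Path.home() / "consent-log.jsonl"                 # a plain file the person controls
  with log_file.open("a") as f:
      f.write(json.dumps(record) + "\n")                       # one line per "agreement", auditable later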

This near-absolute power asymmetry between the Web’s calves and cows is also why you typically get a vast payload of spyware when your browser simply asks to see whatever it is you actually want from the website.  To see how big that payload can be, I highly recommend a tool called PageXray, from Fou Analytics, run by Dr. Augustine Fou (aka @acfou). For a test run, try PageXray on the Daily Mail’s U.S. home page, and you’ll see that you’re also getting this huge payload of stuff you didn’t ask for:

Adserver Requests: 756
Tracking Requests: 492
Other Requests: 184

The visualization looks like this:

This is how, as Richard Whitt perfectly puts it, “the browser is actually browsing us.”

All those requests, most of which are for personal data of some kind, come in the form of cookies and similar files. The visual above shows how information about you spreads out to a nearly countless number of third parties, and to others that depend on those. And, while these cookies are stored by your browser, they are meant to be readable only by the server or one or more of its third parties.

This is the icky heart of the e-commerce “ecosystem” today.

By the way, and to be fair, two of the browsers in the graphic above—Epic and Tor—by default disclose as little as possible about you and your equipment to the sites you visit. Others have privacy features and settings. But getting past the whole calf-cow system is the real problem we need to solve.


Cross-posted at the Customer Commons blog, here.

Is being less tasty vegetables our best strategy?

March 8, 2021

We are now being farmed by business. The pretense of the “customer is king” is now more like “the customer is a vegetable” — Adrian Gropper

That’s a vivid way to put the problem.

There are many approaches to solutions as well. One is suggested today in the latest by @_KarenHao in MIT Technology Review, titled

How to poison the data that Big Tech uses to surveil you:
Algorithms are meaningless without good data. The public can exploit that to demand change.

An excerpt:

In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage:
  1. Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.
  2. Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.
  3. Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.
People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.
What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.

The sourced paper* is titled Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies, and concludes,

In this paper, we presented a framework for using “data leverage” to give the public more influence over technology company behavior. Drawing on a variety of research areas, we described and assessed the “data levers” available to the public. We highlighted key areas where researchers and policymakers can amplify data leverage and work to ensure data leverage distributes power more broadly than is the case in the status quo.

I am all for screwing with overlords, and the authors suggest some fun approaches. Hell, we should all be doing whatever it takes, lawfully (and there is a lot of easement around that) to stop rampant violation of our privacy—and not just by technology companies. The customers of those companies, which include every website that puts up a cookie notice that nudges visitors into agreeing to be tracked all over the Web (in observance of the letter of the GDPR, while screwing its spirit), are also deserving of corrective measures. Same goes for governments who harvest private data themselves, or gather it from others without our knowledge or permission.

My problem with the framing of the paper and the story is that both start with the assumption that we are all so weak and disadvantaged that our only choices are: 1) to screw with the status quo to reduce its harms; and 2) to seek relief from policymakers.  While those choices are good, they are hardly the only ones.

Some context: wanton privacy violations in our digital world have only been going on for a little more than a decade, and that world is itself barely more than a couple dozen years old (dating from the appearance of e-commerce in 1995). We will also remain digital as well as physical beings for the next few decades or centuries.

So we need more than these kinds of prescriptive solutions. For example, real privacy tech of our own, starting with digital versions of the privacy protections we have enjoyed in the physical world for millennia: clothing, shelter, doors with locks, and windows with curtains or shutters.

We have been on that case with ProjectVRM since 2006, and there are many developments in progress. Some even comport with our Privacy Manifesto (a work in progress that welcomes improvement).

As we work on those, and think about throwing spanners into the works of overlords, it may also help to bear in mind one of Craig Burton‘s aphorisms: “Resistance creates existence.” What he means is that you can give strength to an opponent by fighting it directly. He applied that advice in the ’80s at Novell by embracing 3Com, Microsoft and other market opponents, inventing approaches that marginalized or obsolesced their businesses.

I doubt that will happen in this case. Resisting privacy violations has already had lots of positive results. But we do have a looong way to go.

Personally, I welcome throwing a Theia.


* The full list of authors is Nicholas Vincent, Hanlin Li (@hanlinliii), Nicole Tilly and Brent Hecht (@bhecht) of Northwestern University, and Stevie Chancellor (@snchencellor) of the University of Minnesota.