Finding the Right Platform Thought Experiment

In this thought experiment, to help find the right platform for me to study, I’m going to evaluate various options in the context of an educational VR app. The purpose of the app will be to help high school students learn about physics, orbital mechanics, and the solar system. It would allow students to modify orbits and planetary properties, and to send missions from Earth to Mars and elsewhere.

I’ve already pretty much set my mind on taking the Unity Immersion course and working on the Vive, because I have one and because I really want to build deeply immersive worlds, not so much mobile-type apps. However, it would be shortsighted not to explore other platforms, so I look forward to having my preconceptions challenged.

Persona

Age: 16
Occupation: Student
Name: Harvey Pruitt
Quote: “School is so boring, you just sit and listen to people talk for hours, but don’t actually do anything.”
VR Experience Level: Intermediate
Motivation: Harvey is a smart student, but is bored just listening to lectures. He wants to build and make things — to do something other than just sit there.

Considerations

Price of platforms:

How accessible would each VR platform be to your target student in terms of price? Take into account location, age, and income.

  • Cardboard: Cost $400-600 for the phone, + $15 for headset if not DIY
  • Daydream: Cost $400-600 for the phone, $80 for headset (possibly much less if a standalone headset comes out soon)
  • Oculus VR: $500 for headset; probably need a computer around $300-400
  • Vive: $600 for headset; probably need $600 in the PC

Cardboard & Daydream would be the most accessible, as most people already have a smartphone; even if a parent would not let their kid have a smartphone, they likely own one themselves and, if it’s for educational purposes, would gladly let their child use it. When the standalone Daydream headsets come out, parents might buy one for their children. Schools would be more likely to invest in multiple Daydream headsets as opposed to the Oculus and Vive, since they will be cheaper, lack cables, and not require complicated setup and teardown procedures, making them more versatile.

Oculus and Vive would be more expensive, and would likely only be owned by middle-class families. This could mean that lower-income families and those that live in inner cities might not be able to take advantage of them. Schools might buy the headsets, but at those prices they’re not going to buy very many, limiting how many students can use them at one time.

Interactivity:

How interactive does your lesson need to be? For example, do I need to pick things up or could I get away with just looking at objects?

To understand how orbital mechanics work, you’re going to need to be able to move objects and see how their paths change. You might also want to edit the properties of planets and moons, such as their density and mass, to see how that affects gravity. Being able to make denser, more massive objects “feel” heavy to the student can drive the point home. This would likely require some sort of controller, at a minimum the Daydream one, which can simulate this (as per the Daydream Elements demo.) A Cardboard headset would probably be unable to simulate it.

However, one could build an app where the student just uses the sight reticle and works with a menu. Though cumbersome and probably annoying, it is still possible.
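
Either way, the heart of the lesson is the same simulation loop: every frame, each body accelerates toward every other body, so editing a planet’s mass immediately bends the surrounding orbits. Below is a minimal sketch of that loop – in TypeScript purely for illustration (the actual app would use Unity C#, and the logic ports over directly); the units, time step, and integration scheme are my own assumptions, not anything from the course.

```typescript
// Minimal sketch of an n-body gravity step. Each update, every body
// accelerates toward every other body, so editing a planet's mass
// changes all nearby orbital paths on the very next frame.
// SI units and the integrator (semi-implicit Euler) are illustrative choices.

interface Body {
  mass: number;            // kg
  pos: [number, number];   // m
  vel: [number, number];   // m/s
}

const G = 6.674e-11;       // gravitational constant, m^3 kg^-1 s^-2

function step(bodies: Body[], dt: number): void {
  // First pass: accumulate accelerations and update velocities.
  for (const a of bodies) {
    let ax = 0, ay = 0;
    for (const b of bodies) {
      if (a === b) continue;
      const dx = b.pos[0] - a.pos[0];
      const dy = b.pos[1] - a.pos[1];
      const r2 = dx * dx + dy * dy;
      const r = Math.sqrt(r2);
      // Newton: acceleration toward b with magnitude G * m_b / r^2.
      const acc = (G * b.mass) / r2;
      ax += (acc * dx) / r;
      ay += (acc * dy) / r;
    }
    a.vel[0] += ax * dt;
    a.vel[1] += ay * dt;
  }
  // Second pass: move everything using the updated velocities.
  for (const a of bodies) {
    a.pos[0] += a.vel[0] * dt;
    a.pos[1] += a.vel[1] * dt;
  }
}
```

Double one body’s mass mid-run and every nearby trajectory visibly changes on the next frame – exactly the cause-and-effect feedback Harvey needs.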

Realism:

How realistic do your visuals need to be in order to teach? For example, could I use 2D images and videos in a 3D Environment or do you need high poly 3D models.

The graphics would not have to be too advanced for this sort of app. High-resolution textures for the planets, moons, and the Sun would be preferable, but the novelty of being positioned in the virtual solar system as a colossal, godlike being would make up for less-than-cutting-edge graphics and particle effects. In fact, if you’re doing pure physics, you could even leave out the real solar system entirely and make a completely made-up one using low-poly models.

Active vs Passive:

Does my student need to feel like a participant in the experience or can they be a passive viewer? Could they be both?

The student should be as active as possible in the app. Although observation will be required at times to learn how things work, the student should be able to modify and play around with the settings as soon as possible. The entire point is to get the student more involved, less bored, and doing something rather than just sitting there.

Conclusion:

Given the answers above, what are potential platforms you could use for your experience?

Based on the above considerations, potential platforms would be the Daydream, Oculus, and Vive. Both the Oculus and Vive would be able to run the app ably. Daydream would also be able to do so, and while it has fewer capabilities, it should still handle it fine. Cardboard would be too annoying for the student to use, not having a separate controller and being too limited.

I’m still focused on building for the Vive, but I was unaware of the capabilities of the Daydream system. The standalone headsets that are coming surprised me; they could do quite a lot for building VR worlds. I would like to explore Daydream in the future, as it looks like an affordable yet capable alternative to the Vive, and thus has a much larger potential audience for applications. However, as I do not have a true Daydream-capable device, I will stick to taking the Unity Immersion course and working with my Vive.

A Compromise in the Ad Wars

Begun, the Ad Wars have.

Long have web users been frustrated with advertisements on the web. They’re intrusive, both on the screen and in your data; annoying; obnoxiously loud; and frequently don’t offer anything we’re interested in. (And when they do, we find the tracking pretty damn creepy.) The annoyance has gone so far that Apple has baked ad-blocking into the new version of Safari for iOS, which has sent everyone in the advertising and web content businesses into a panic.

A good piece on all this is by Nilay Patel of The Verge, where he explains why ads are important to web producers (as well as how this is really just another salvo in the endless Apple vs. Google match.) There’s also this article in Advertising Age which displays a stunning amount of ignorance from advertisers, though I suppose it isn’t that stunning when this is literally how they put food on the table.

In short, we need to keep the ads there in order to fund the content we want to read. As Patel puts it (emphasis in original):

Those huge chunks — the ads! — are almost certainly the part you don’t want. What you want is the content, hot sticky content, snaking its way around your body and mainlining itself directly into your brain. Plug that RSS firehose straight into your optic nerve and surf surf surf ’til you die.

Unfortunately, the ads pay for all that content, an uneasy compromise between the real cost of media production and the prices consumers are willing to pay that has existed since the first human scratched the first antelope on a wall somewhere. Media has always compromised user experience for advertising: that’s why magazine stories are abruptly continued on page 96, and why 30-minute sitcoms are really just 22 minutes long. Media companies put advertising in the path of your attention, and those interruptions are a valuable product. Your attention is a valuable product.

For better or worse, he’s right. The ads pay, and not well, but they pay enough to keep a lot of publishers in business giving you great content. The problem, though, is that many of these ads are horribly invasive.

First, you have the ads that completely block the screen and won’t let you continue for ten seconds. That doesn’t seem like a long time, but it is when you’re just trying to browse a news story; why wait ten seconds? Just close the tab and go elsewhere.

Then there are autoplaying video ads, which not only intrude into your music if you’re on Spotify or listening with VLC, but can also bother other people. Imagine you just got your kid to sleep, you’re looking at something on the laptop, and a video ad plays and wakes the baby up. Or you’re at work, and one just starts playing in the office, carrying up and down the hall while people are on phone calls or working. The ad doesn’t even have to be offensive or vulgar; the very act of intruding into the environment beyond your screen is already offensive. Those ads work on TV because that’s the whole point of TV and we’re expecting them; the same expectations do not carry over to the web.

And then there are the ads that aren’t there at first, which then appear, expand, and move everything around on screen. You know what I’m talking about, whether they’re the expanding banners above the navigation menu or videos that open up in the middle of the story itself. You go to click on a link, the ad shifts everything around, and you end up clicking something completely different. That’s just frustrating, and the last thing anyone should be doing is making a dirt-simple task like browsing the web frustrating. It defeats the entire purpose.

Naturally, this is before we get to the part about data tracking and ads eating into people’s bandwidth and data caps. Especially for those with slow and not terribly great Internet connections, that’s just downright rude.

I think that’s the really bad part of ads. We’re not terribly concerned (I think; I could be wrong) about simple silent visual ads on the side, or even one at the top that loads in with everything else and doesn’t move around DOM elements as you’re reading. All of us who’ve used the Internet over the past 10 years have dealt with those, and haven’t minded them at all. So in the spirit of goodwill and making the Internet a better place, I propose a compromise.

Advertising will be permitted on websites, EXCEPT FOR THE FOLLOWING:

  1. Any ad that overlays the website and blocks viewing of the content for any length of time
  2. Any ad that plays audio or video without being selected to do so by the conscious effort of the user
  3. Any ad that manipulates the Document Object Model (the DOM – the page itself) to move elements after the page has loaded (a rough detector for this one is sketched below)
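
Rule 3 is the most mechanical of the three, so here’s a minimal sketch (browser TypeScript) of what flagging violations might look like: watch for elements injected well after the page has loaded that take up space in the layout. The two-second grace period and the size heuristic are my own assumptions, not anything a real content blocker specifies.

```typescript
// Sketch: flag post-load DOM insertions that can shift content around.
// The grace period and the "takes up space" heuristic are hypothetical.

const GRACE_MS = 2000; // let the page's own layout settle first

window.addEventListener("load", () => {
  setTimeout(() => {
    const observer = new MutationObserver((mutations) => {
      for (const m of mutations) {
        for (const node of Array.from(m.addedNodes)) {
          if (!(node instanceof HTMLElement)) continue;
          const rect = node.getBoundingClientRect();
          // A sized element appearing this late is a candidate layout-shifter.
          if (rect.width > 0 && rect.height > 0) {
            console.warn("Late insertion may be shifting content:", node);
          }
        }
      }
    });
    observer.observe(document.body, { childList: true, subtree: true });
  }, GRACE_MS);
});
```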

Naturally, advertisers and publishers will likely scream bloody murder, as these do a wonderful job capturing our attention — our negative attention — and getting our eyeballs. Taking them away will probably result in a drop in revenue, and will likely result in the industry undergoing a bit of a shift. We’ll be cutting out a lot of advertising for this. Patel notes this at the end of his piece:

And the collateral damage of that war — of Apple going after Google’s revenue platform — is going to include the web, and in particular any small publisher on the web that can’t invest in proprietary platform distribution, native advertising, and the type of media wining-and-dining it takes to secure favorable distribution deals on proprietary platforms. It is going to be a bloodbath of independent media.

[…]

But taking money and attention away from the web means that the pace of web innovation will slow to a crawl. Innovation tends to follow the money, after all! And asking most small- to medium-sized sites to weather that change without dramatic consequences is utterly foolish. Just look at the number of small sites that have shut down this year: GigaOm. The Dissolve. Casey Johnston wrote a great piece for The Awl about ad blockers, in which The Awl’s publisher noted that “seventy-five to eighty-five percent” of the site’s ads could be blocked. What happens to a small company when you take away 75 to 85 percent of its revenue opportunities in the name of user experience? Who’s going to make all that content we love so much, and what will it look like if it only makes money on proprietary platforms?

There are numerous problems with Patel’s analysis. The first is the implicit notion that small sites are entitled to stay alive, entitled to get ad money out of you, and entitled to your eyeballs. This is most certainly not the case. Yes, it’s true that with fewer ads, many sites might go under. But my response to that is:

So what?

Most of the stuff on the Internet is pure dreck. Lots of sites regurgitate other websites’ stories without adding any actual value or new information; others just churn out nonsense articles that make you ask yourself if you wasted the five minutes reading them. For an example of the latter, check out this “story” from a site called “Neurogadget” on the Microsoft Surface Pro 4. There is literally no substance to the story; its 310 words basically say, “The first Surface wasn’t great, the second was a little better, the third was really awesome, and the fourth is probably going to be really super awesome.” That’s it; no specs, no data, just a lot of empty fluff.

For an example of the former, take any story from Mediaite (perhaps not fair, as they do cover the media), HuffPo, or any other big or even medium-sized website. I’ve seen them, the stories that quote liberally from another story and add basically nothing more than a rephrasing of the original, plus maybe a few links (as this story by HuffPo, which is far from the worst, does.)

This is not content. This is not information. It’s just noise. I would say that about 80% of the stuff on the internet is just junk, and it deserves to go away permanently. Tell me, what did GigaOm provide that sites like CNET and PCMag did not? We don’t need this noise, and frankly, nobody ultimately cares. It will not be a huge loss to humanity; we will simply move on. (Plus, there is a legitimate question over whether The Awl is short for The Awful.)

Another problem is that GigaOm and The Dissolve went out before the ad blocking controversy began. That means the current ad environment could not save them anyway, so that part is pretty much moot. Whether or not Apple ships baked-in ad blocking is utterly irrelevant, as they failed regardless. So that tells me that ad blocking, at least the institutionalized form everyone is arguing over now, doesn’t really matter.

But then finally, the last nail in the coffin is that for years, we’ve had great content provided sans ads, and you know what? We still get great content today. The best websites are those that have no ads, or a minimal amount. Orion’s Arm is one of my favorite websites, and it doesn’t have ads. Neither does my friend’s blog. Reddit keeps a close handle on ads and doesn’t let them pop up. Wait But Why does have pop-ups, but they’re easily dismissable, and none of them play any sound.

The best content is usually unpaid, for the precise reason that it isn’t rushed to make some deadline, isn’t done to just be clickbait and get ad money, and instead has passion and thoughtfulness infusing it. If you’re writing something without being paid or compensated by ads, you’re doing it because you really have something to say, something you care about. That makes these things far better, and makes the ad-supported content look kinda terrible in comparison.

So I won’t weep for a loss of these sites. It’ll be the market and consumer demand weeding them out, and that’s a good thing. The clickbait headlines and stories rushed out immediately in order to get advertising clicks have, probably, made our society much dumber. We no longer take the time to think, we must spew something forth immediately, damn the truth! So maybe blocking these kinds of ads will also lead us to slow down, think, actually make decisions and not just blindly throw something, anything, on a page to get those ad clicks.

On the other hand, viewers will probably hate me because there are two things I didn’t block in the compromise: tracking and bandwidth. The latter because every web developer should be minimizing bandwidth usage by default; it’s been a terrible, hateful trend lately to absorb as much bandwidth as possible with cool animations and whatnot, but developers should be designing the most efficient sites possible. As for tracking…well, I’m convinced that by and large, privacy as we know it is dead. The next generation will know nothing of it, and in the long run it’s a lost cause. Big data is here and it will stay. Sure, you can use extensions to block that stuff if you want, but there will always be data flowing around. I don’t consider it something stoppable.

But terrible ads can be stopped. And so can these stupid ad wars, which just illustrate that a huge sector of the web provider industry knows next to nothing about its users.

A Plethora of Links to End 2014

2014 is just about gone, and for the most part, I say: Good riddance. In many ways, 2014 was an awful year for civil liberties, freedom, and for people in general. Yet on the other hand, there are some positive things to report.

One of my 2015 resolutions is to stop posting so much political stuff. I know, I know – I say this almost every month, and yet it never happens. I'm going to try, though, this year, especially since I'm making the effort to make some resolutions. (I'm even going to print them out and put them up on my wall in my bedroom and in my office.) So in honor of that, I wanted to post some last political and semi-political links before the year ended, links that have been sitting on my mind:

The Bad

2014 was a really rotten year for privacy, civil liberties, and in particular for public-police relations. For a long time I thought of writing up a list of all the issues of police overreach and brutality, but I don't have to. Radley Balko, one of the best journalists on the planet, rounded up 2014's civil liberties violations as a "Let me give some predictions for 2015" post. It's chilling to think that, in the nation that is supposedly the leader of the free world, we have so many horrible things going on – most, but not all, being conducted by state and local governments.

I mean, seizing someone's assets, then charging them with a crime, so they can't pay for their own defense? Arresting parents for letting their kids play without supervision? Claiming that your SWAT team is a private corporation and is thus immune to open records laws? Pushing for extrajudicial tribunals for people who may or may not have committed crimes against a certain class of individuals, tribunals where "innocent until proven guilty" and the rule of law are thrown out the airlock? Punishing people who haven't been convicted of a crime?

These are not the signs of a healthy liberal democracy, they're the signs of a damaged one that needs repair, fast.

One story in particular has stood out to me. As many have defended the police in the recent incidents and stories, one thing they may have failed to notice is that even black police officers feel threatened by the "boys in blue". I think once cops are fearful of other cops, then we have indisputable proof that there is a serious problem. And yet people still ignore it. Read the link above for a maddening, frustrating look at what is wrong with policing today. (That one really grinds my gourd, because I think it will be ignored by most.)

Meanwhile, on the other side of the coin, in 2014 progressives became nattering nabobs of negativity – or, in other words, conservatives. reason magazine highlights how 2014 heralded the return of "Neo-Victorianism", and I'm thankful that Elizabeth Nolan Brown wrote that article, because I've been struggling to find the right word for this new trend: using coercion and bullying to enforce a set of social norms, mostly deployed by feminists, it seems. The four major areas are increasing art censorship; a hysteria over sex-trafficking that tramples individual rights while punishing sex workers, many of whom don't think they're victims and like their jobs, thank you very much; a stretching of "hate speech" to absurd lengths, such that you shouldn't say anything that could potentially offend anyone at any time; and a trend of treating women as dainty little flowers who need to be coddled and protected rather than being allowed to develop into strong and independent individuals.

It's all rather sickening. It too, is not a sign of a healthy democracy.

And let's not get me started on the various abuses by the NSA. Let's just not go there for once.

The Good

There are, however, some great things to look forward to in 2015 that continue from 2014.

The first is in terms of war and crime. Steven Pinker, a wonderful academic, details in a great article for Slate that planet Earth is actually becoming a very peaceful world. I found the article particularly interesting for the following tidbit:

But the red curve in the graph shows a recent development that is less benign: The number of wars jumped from four in 2010—the lowest total since the end of World War II—to seven in 2013. These wars were fought in Afghanistan, the Democratic Republic of the Congo, Iraq, Nigeria, Pakistan, South Sudan, and Syria. Conflict data for 2014 will not be available until next year, but we already know that four new wars broke out in the past 12 months, for a total of 11. The jump from 2010 to 2014, the steepest since the end of the Cold War, has brought us to the highest number of wars since 2000.
[…]
The 2010–2014 upsurge is circumscribed in a second way. In seven of the 11 wars that flared during this period, radical Islamist groups were one of the warring parties: Afghanistan, Pakistan, Israel/Gaza, Iraq, Nigeria, Syria, and Yemen. (Indeed, absent the Islamist conflicts, there would have been no increase in wars in the last few years, with just two in 2013 and three in 2014.) This reflects a broader trend.

That "broader trend" being religious hostilities, with "all but two of these countries" having those hostilities being "associated with extremist Islamist groups." I always find myself on a narrow tightrope when it comes to Islamism; on the one hand, I always find conservatives are far too hostile and kneejerk when they want to just fight Muslims and bomb them; on the other hand, I think that many libertarians and leftists slide Islam's problems under the rug and prefer not to notice. Don't kid yourselves, guys: although Christianity has issues, it has largely been tamed and neutered by modernity. Islam hasn't. And Islam has got loads of problems.

But even despite that, the world is far more peaceful than the news reports make it out to be. Outside of the Middle East, we have the conflict in Ukraine – and that has basically been frozen. The drop in oil prices has crushed the Russian economy, so I don't know if Putin will continue to help his "allies" in Donetsk and Lugansk. There are conflicts in the Democratic Republic of the Congo and Nigeria, but to be honest I know very little about them.

Meanwhile, Fraser Nelson in The Spectator (UK) reports on how we're winning the war on disease. In 1990, diseases claimed roughly 37,500 years of life per 100,000 people; now they claim only about 26,000 (judging by my eyes on that chart.) That's roughly a 30 percent drop. Starvation has dropped by over ten percentage points. Infant mortality has plummeted. This is all extremely good news.

The last one is a story on upcoming disruptive technologies, many of which are going to build on 2014 discoveries. I post this one because I have a bit of a quibble with the author, Vivek Wadhwa. Although I think most of his points are relatively sound, inasmuch as I, not being an expert in these areas, can judge them, his section on energy has problems. First, he leads off by saying that fracking is a harmful technology – newsflash, it isn't. Second, he says that solar power will hit grid parity by 2020, which I think is unlikely considering how expensive solar power is. (Seriously, the people I know who study energy saw a similar story by Wadhwa and they claimed it hurt their brains.) Third, Wadhwa claims that if we have unlimited energy,

we can have unlimited clean water, because we can simply boil as much ocean water as we want. We can afford to grow food locally in vertical farms. This can be 100 percent organic, because we won’t need insecticides in the sealed farm buildings. Imagine also being able to 3D print meat and not having to slaughter animals. This will transform and disrupt agriculture and the entire food-production industry.

Wadhwa might be right about unlimited energy and unlimited clean water, but even if he is, the rest doesn't follow. Water isn't the only resource. Why would we grow food locally? It's not necessarily more efficient than growing food on larger farms elsewhere. And what about the time involved? When Wadhwa says "locally," I see the localist woo argument about people growing food in their backyards. But that takes time, and who wants to waste time growing your own food when you can buy it at the store and instead spend your time going to sports events, watching TV, writing blog posts, or going on romantic getaways? Wadhwa ignores that, and it hurts, both his piece and my head.

I'm also a little miffed he didn't mention Lockheed Martin's new fusion reactor project (more on that later), but I totally agree with him on synthetic meat – which I think will be a huge advance – and he makes good points about 3D printing, finance, and healthcare. In all areas, we're talking about some radical decentralization.

The Awesome

Okay, the last bit. The really cool stuff.

Scientists did some really cool things in 2014. I mean, some really scifi things. Quantum teleportation for instantaneous communication, blood-based nanites to repair your body, 3D food printers, hoverboards – 2014 was a really cool year for tech.

Meanwhile, the one news item that really made me jump was Lockheed Martin's announcement that in five years they'll have a prototype for a commercial fusion reactor. There are a lot of questions and criticisms of this, with many having doubts – but if anyone is going to deliver a power source that is clean and nearly limitless, it's going to be Lockheed Martin. And I hope it pans out, because I think that:

  1. It would provide enough energy to avoid the coming energy shortfalls as our iCivilization keeps getting bigger
  2. It would go a long way towards making climate change a nonissue
  3. It would go a long way towards getting the US out of the Middle East as we wouldn't have to worry about the oil reserves there
  4. It would weaken OPEC, Venezuela, and Russia (yes that's a cheap geopolitical shot but I think it's valid)
  5. A fusion rocket could get us from Earth to Mars in 30 days rather than six months
  6. It could power the warp drive that NASA is working on
  7. As energy is one of the largest input costs, this could make everything cheaper across the board by a considerable factor
  8. Bonus – Gundams.

I'm really hoping that 2015 will turn out to be even cooler.

And finally, for one last speculative item, there's a guy in Nebraska building a warp drive in his garage. Okay, okay, it's pretty far out there, man, but when you read stuff like this:

He turns around and points to the back of his garage door, where a red laser — beamed at the weight and reflected back against the door to demonstrate the movement happening in the case — drifts from its original spot. Slowly, in incremental amounts, the weight is drawn toward the V-shape motor.

You gotta wonder.

I’m With Elon: Let’s Colonize Mars

So Elon Musk wants to screw Earth and colonize Mars. Excellent, I completely agree. Let’s get started.

The interview Musk gave to Ross Andersen of Aeon Magazine is fantastic. It’s been a long time since I’ve read such a forceful advocacy for space colonization, which is refreshing. It seems like the cause of space has languished over the past couple of decades while people focus on more down-to-Earth matters. I think they’re forgetting that many of our down-to-Earth matters could probably be solved by going outward and exploring new frontiers – and settling them!

My reasons are different than Musk’s, though. Musk seems to be afraid that, since we haven’t discovered any interstellar aliens in our searches of the night sky, something bad must have happened to all of them:

Musk has a more sinister theory [to the Fermi Paradox, basically –Jeremy]. ‘The absence of any noticeable life may be an argument in favour of us being in a simulation,’ he told me. ‘Like when you’re playing an adventure game, and you can see the stars in the background, but you can’t ever get there. If it’s not a simulation, then maybe we’re in a lab and there’s some advanced alien civilisation that’s just watching how we develop, out of curiosity, like mould in a petri dish.’ Musk flipped through a few more possibilities, each packing a deeper existential chill than the last, until finally he came around to the import of it all. ‘If you look at our current technology level, something strange has to happen to civilisations, and I mean strange in a bad way,’ he said. ‘And it could be that there are a whole lot of dead, one-planet civilisations.’

Personally, I’m more in favor of the Great Filter being life itself. Wait But Why has a great blog post on the Fermi Paradox and all of its implications, and count me as a guy who thinks that life is much less likely to arise than Ross Andersen seems to think (going off what he writes in Aeon; it might be he’s just summarizing what others think and that’s not his own opinion.) I don’t look at this as a bad thing; instead, we now have the entire cosmos open to ourselves. We are the Ancients, the Precursors, the Progenitors of life in a barren and empty universe.

But not if we screw it up before we get out there.

I’m not talking about the existential fears that most people talk about. I’m not worried about nuclear war or plague or global warming killing us. To be sure, we have some problems for this century: we need to stamp out religious and ideological extremism that leads to violence; find new and renewable sources of energy to keep powering our civilization; and maybe not build artificial superintelligences in our basements. But I think these (well, to one extent or another) are all manageable. The problem I fear is one of philosophy, political science, and sociology. We need space colonization to overcome the dimming of the (classical) liberal vision.

I’ve been thinking about this topic for a long, long time. Well, over a year, to be more exact, but it’s been fluttering in my head for longer. The problem is that I’m finding it very hard to put into words why we must colonize Mars – and the rest of space – to preserve classical liberalism, and by extension civilization, freedom, and all those good things.

I look at the growth of government over the past century and I see it as expansion turning inwards. There is less for us to go out and explore now. We no longer have a frontier, a Wild West where the government’s arm is distant and individuals rely on themselves. It seems very romantic because it is very romantic – and of course, there were problems. Colonization uprooted and destroyed indigenous cultures all over the world, and caused pain and suffering by bringing diseases, bloodshed, and slavery. The Wild West was not as dangerous as the Western movie genre made it out to be, but there was racism, crime, and an eye-for-an-eye mentality in some parts. My point, though, is that as long as there was a frontier, there was an argument for freedom. Government could not expand inwards on people because there was somewhere to expand outwards.

But then the 20th century came. By now, there was nowhere left to expand to. The only uncolonized parts of our world are the Arctic, the Antarctic, and the bottom of the oceans – the first two being extremely inhospitable and undesirable, the last uninhabitable until somebody decides to invent SeaQuest in the real world. (Get on that, Musk.) Now the expanding mass of government ran up against a solid wall, and as it hit this wall it folded back in on itself and expanded back towards its center. Now it was expanding on top of itself, layering itself upon itself, burying beneath itself the seeds of liberalism and freedom. Where else could it go now but onto its own people?

We lost the frontier. On top of that, we continued to multiply. I hate thinking in this manner, but the law of supply and demand comes back to haunt me. We have all these people now, and we keep having more, and I wonder: as supply goes up, does demand go down? It used to be you could know everyone in your community. Now, do we just look at others as statistics? Not even fully autonomous human beings? Do we think everyone around us is a p-zombie? It seems very crass on one hand – how can we apply supply and demand to people? – and yet very conservative on the other – here I am talking about community and how the modern era has increased the distance between us and yadda yadda yadda. Not being that sort of conservative – or really, any conservative at all – it’s hard for me to put this into words.

Unfortunately, I don’t have to. From China, we have stories of how low human life is valued.

One that stands out: the toddler who was run over by two vehicles and ignored by scores of passersby before finally receiving help.

These are just the things that come to the top of my mind. I don’t know if it’s because there are a lot of people in China, if there’s something deeper in Chinese culture, or if these are really bad examples. But that is what I think of when I see rising population. Is this something we can overcome? Is it bound to happen?

Then there is the issue of running out of work for people. I know many scoff at the idea, but there is some concern about “technological unemployment”. My friend Travis Thornton has blogged about this subject before. Now, personally, I am all in favor of a post-scarcity economy, and I think it’s absolutely delightful that we’re heading towards one…but are we going to need a new thing to give us meaning? Why can’t that thing be a settled, terraformed Mars?

[Image: “TerraformedMoonFromEarth” – the Moon terraformed, covered in blue seas, green forests, and wispy white clouds. Licensed under CC BY-SA 3.0 via Wikimedia Commons.]
I have to admit, a terraformed Luna would look cool.

I realize these thoughts are not entirely coherent or cogent. Like I said, I’m having difficulty putting what I’m thinking and feeling into words. That’s why I’m doing this blog post, to solicit feedback and comments and see if I’m on the right track. But essentially, what I see is that, to preserve classical liberalism, individual freedom, and a culture of the same, we need to start colonizing planets. We need to go with Musk and start doing this right now. It doesn’t necessarily have to be Mars. We should also colonize the Moon (though terraforming it would be a waste of time I think, since it doesn’t have enough gravity to hold on to an atmosphere, unless you paraterraform), and we should probably also build O’Neill and McKendree Cylinders. Eventually, we might even terraform Venus, build Banks Orbitals and a Ringworld (okay, fine, we can have one Halo off in the corner for all the first person shooter types) and then from there…

The galaxy will be our oyster.

But not if we get stuck here. It’s not the asteroids that will kill us, or the threat of alien invasion, or potential nuclear war or grey goo or artificial superintelligence. If anything does us in, it will be the banal overlayering of bureaucratic, authoritarian government, run by busybodies and people of little vision. Humanity needs a new frontier, and there are many out there: uninhabited, barren, lifeless, ready for us to come. We need that frontier to rekindle our spirit of freedom, and get us moving again. Take the germ of liberalism, and spread it across the stars.

That’s my vision for the future. And that means I’m right there with Elon Musk. Let’s go to Mars.

Artificial Wombs & Virtual Childhood

About a month ago, transhumanist Zoltan Istvan – who created a bit of an unrelated brouhaha in my Feedly – wrote about artificial wombs, saying they were inevitable and would do a lot of good for society:

Of all the transhumanist technologies coming in the near future, one stands out that both fascinates and perplexes people. It’s called ectogenesis: raising a fetus outside the human body in an artificial womb.

It has the possibility to change one of the most fundamental acts that most humans experience: the way people go about having children. It also has the possibility to change the way we view the female body and the field of reproductive rights.

Naturally, it’s a social and political minefield.

The whole article is a fascinating read. I don’t really have a stance one way or another towards it. I think many women would be happy to have their biological offspring be raised outside their body, if only because of the physical strain involved. Others (both women and men) probably would find that very notion offensive and decline to partake. Whatever. But I’m not here to really critique Zoltan’s particular view of transhumanism (other than I think his timetable is way too short.)

It’s just that looking at this article made me think of another transhumanist idea that has long bounced around inside my brain. In a fictional form, it would go somewhat like this:

Ectogenesis was the first step towards radical reproductive liberation. The first two generations of ectogenesis children – derogatorily called the “pod people” by many – suffered social ostracism and persecution, but within fifty years roughly two thirds of all children were born in pods and the stigma disappeared.

But it wasn’t enough.

Ectogenesis was still time consuming, and you still had to raise the child after decanting. Studies in virtual brain emulation had long ago borne fruit, to the point where an identical simulation was possible. Scientists began a brave experiment where they took the genetic samples of two volunteers, combined them to form a zygote in a pod, then simultaneously created a virtual copy in a virtual environment. The virtual brain developed while the physical brain was not allowed to develop consciousness; the virtual brain was then, through nanoprobes, downloaded into the fetus after nine months of development.

That didn’t really change anything, but it did prove that virtual brain development was possible. The next generation of experiments went farther. Incorporating growth acceleration technology that had originally been perfected for orbital agriculture habitats, scientists were able to take the zygote straight to twenty years of age in only nine months. They also sped up the virtual environment, creating an entire society (complete with eidolons of the parents and relatives and real world people) that would be an effective “proving ground” for the growing mind. When all was said and done, the body and mind that emerged from the pod was chronologically only nine months old, but had twenty years of subjective physical and mental experience, and was ready to enjoy real life society immediately.

Of course, there were criticisms. More biological, “natural” humans saw this as too much of an aberration; the virtuals scoffed at the notion anyone would want to live in meatspace. Yet over time this mixed virtual-biological lifestyle took hold. Within six generations, roughly 85% of all humans had spent their first twenty subjective years in a virtual simulation before being downloaded into a specially grown body, derived from the genetic samples of two or more “parents.”

However, now the definition of human had changed dramatically. No longer were the “naturals” considered “natural”; they were merely “full-stack biologics” living mostly in segregated neighborhoods and, in some cases, even on reservations. The virtual born – or “Virtborn” – became the natural ones, but with that came a loss of emotion, a growing collective mindset, and a subsequent decline and fall in the arts and sciences. The “biologics,” in turn, began to develop increasingly eccentric cultural traits in order to “prove” they were the true humans, including bringing back ancient human practices such as zoot suits, black coffee, and a particularly brutal form of physical competition designed to identify “manly” qualities among males called “hockey.”

Okay, so I kinda let myself go at the end there. But the idea has been in my head for some time. I don’t know if it’s feasible – though it probably is. I’m also fairly certain I didn’t come up with it myself and instead read about it elsewhere, though I can’t find anything on the topic at the moment.

I thought about it again when I attended an event on Sam Harris’ book tour. He was asked a question about computers and ethics, and he stated (and here I must paraphrase, for my memory is terrible) that, if we could replace a malfunctioning neuron in our brain with an artificial neuron that completely replicates the replaced neuron’s behavior, why not over time gradually replace all of them? And in that case, would we not have a fully artificial brain? And would not that brain be conscious?

That idea of just gradually replacing all our biology until we’re completely metal fascinates me. I mean, if that’s not “transhumanism,” I don’t know what is. Instead of an apocalyptic war between humanity and the machines, we instead have a gradual evolution from biological to synthetic life. Aside from the fascination, I’m not sure how I should feel about that. Is it a good thing? What will we give up to do that? But, since I’m a libertarian transhumanist, as long as it’s voluntary, it should be okay. I think most aspects of transhumanism are glorious and want to see them come about, to alleviate suffering and create more enjoyment. So long as we don’t have early adopters and retros blowing each other up…

Why I’m A Little Sad For Google+ (And What These Companies Are Doing Wrong)

Report: Google to end forced G+ integration, drastically cut division resources | Ars Technica

Google+ seems to be the red-headed stepchild of social media networks – an image not helped by the fact that its branding is all, well, red. But in any case, Google+ has never received the support and attention that other networks – hell, even LinkedIn – have received over the years. It’s been “that Google thing,” and while you would think that, with freaking GOOGLE in the name, everyone would be using it, they haven’t. Now Google appears to be “pivoting” away from Google+, first with G+’s leader leaving the company, and now this news that possibly over 1,000 Google employees will be shifted around. According to the linked story, TechCrunch is calling Google+ the “walking dead.”

Yikes.

There are a lot of great things about Google+ that I like. This commenter on the post lists a lot of them. One thing s/he says I will also say: I do not live on Google+. But on an occasional trip, I can read a lot of interesting people’s interesting thoughts. People like Google’s own Larry Page and Sergey Brin, the inventor of Linux, Linus Torvalds, a few podcasters I follow, and some very fascinating groups on roleplaying games, Linux, computer programming, and science fiction. Google+ also has no character limit, so you can write screeds if you wish, and it does seem like a breath of fresh air to get away from the increasingly stale and stultifying Facebook atmosphere and the hectic, sweltering rapid-fire heat of Twitter.

Perhaps the greatest part of Google+ from a publisher standpoint, however, is how well it integrates with Google search. +1 a story you like, and you push that story higher in the search rankings whenever someone searches for the story’s topic. +1s really help get stories to the top, as they’re usually recommended by your friends (or, at least, in theory they are.) Of course, you can +1 your own stories and help them along as well. It’s a fantastic idea.
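
To illustrate the mechanism, here’s a toy model of that ranking boost. The weighting and the diminishing-returns curve are entirely made up; this is the shape of the idea, not Google’s actual algorithm.

```typescript
// Toy model: endorsements from your circles nudge a result up the rankings.
// The base relevance numbers and the log weighting are invented for the demo.

interface Result {
  url: string;
  baseRelevance: number;       // hypothetical topical relevance, 0..1
  plusOnesFromCircles: number; // +1s from people in the searcher's circles
}

// Diminishing returns: each extra +1 helps a little less than the last.
function score(r: Result): number {
  return r.baseRelevance * (1 + 0.1 * Math.log1p(r.plusOnesFromCircles));
}

const results: Result[] = [
  { url: "a.example", baseRelevance: 0.8, plusOnesFromCircles: 0 },
  { url: "b.example", baseRelevance: 0.7, plusOnesFromCircles: 40 },
];

results.sort((x, y) => score(y) - score(x));
console.log(results.map((r) => r.url)); // b.example now outranks a.example
```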

Unfortunately, Google made some serious errors. One of them was the forced integration with YouTube comments. Look, hateful, dumb, anonymous YouTube comments are an Internet institution. They are worse than even 4chan’s, and that’s almost an Xbox achievement. And yet people want them to be that way, and they don’t want their real names and faces attached. Destroying that pissed off a massive Internet community, one that’s arguably larger and more entrenched than even Facebook’s. Not a smart move, Google. Forced integration with other services beyond search was also starting to grind on a lot of people’s nerves.

My personal pet peeve, however, is how Google has been so damn stingy with the personal profile API. It has allowed some services, such as Buffer (which I highly recommend, by the way), to post to Google+ pages, but so far it hasn’t let anyone post to Google+ personal profiles. There was, for some time, an Android app called “Jift” that allowed you to simultaneously post to Twitter, Facebook, and Google+, but it was pulled after a while (and its style of reading posts from all three services wasn’t that great to begin with.) This forces people to get out of their main app, which usually can post to Twitter and Facebook simultaneously (like, say, Plume), and go into the Google+ app. You know who wants to take that extra step? Nobody. Why not just do it all in one go? That would make it dramatically easier to use Google+, although it would cut down on the number of people actually using the Google+ app.

Which is why Google hates it.

This brings me to what I think all of these companies are doing wrong: they are not providing choices. That’s what a consumer wants, right? A consumer, in most instances, wants a choice to make. And I don’t see why companies can’t do this. You have Google+ forcing itself on YouTube commenters and in other places; you have Facebook now forcing people to get a separate app on their phone to use Facebook messages, and soon a separate app for events and possibly even groups as well. You have them changing up user interfaces all the time with no option to go back to an older one that some users prefer. It’s this forcing of things that is really starting to annoy people, though I wonder if it will truly bite these companies in the behind, because while annoyed, most people still go along with the changes. Still, how can providing your users more choices ever be a bad thing?

There’s the old saw that “The user isn’t the customer, the user is the product” but I don’t know how much I buy that. Facebook runs on its users and it can’t get anywhere without them, so it can’t annoy them too much (although the line of no return seems to keep receding into the distance day by day.) Google as well, though I’ll be honest I have no freaking clue how Twitter makes money. (They can’t be selling that many promoted accounts and tweets, can they?)

I hope Google+ sticks around. (I really hope it overtakes Facebook some day, so I can actually escape.) I hope it continues. But we’ll have to see, won’t we?

Was I right about Bitcoin?

Last year, when writing up my 2014 predictions, I made the following prediction about Bitcoin:

Bitcoin will crash

This is just a hunch, but not too long ago we saw China ban Bitcoin transactions (as well as Thailand and India, basically) and the value of Bitcoin plunged about 40%. That’s pretty damn volatile, and while I am not a monetary expert, I think that’s not good for a currency. If the value of your currency changes that much so quickly (and if you put in terms of it’s rapid growth over 2013, which I think was 50x–if you bought $100 worth in January it was worth $5,000 at the end of the year) it’s not going to be very usable.

Now I could be totally wrong about that, and also totally wrong about this, but I have a hunch Bitcoin will dramatically decrease in value next year and might go out. I think it will also bring down other cryptocurrencies, including Litecoin, Peercoin, and the so-called “Dogecoin,” which is actually real. Although it’s also hit the reputation of these cryptocurrencies already…

Of course, everyone is ranting about the bankruptcy and shutdown of one of the largest Bitcoin exchanges, Mt. Gox, in Tokyo. There have also been major thefts and hacks elsewhere. Roughly $400 million worth of Bitcoin has been stolen and disappeared, amounting to somewhere around 6% (if I’ve read correctly) of the entire global Bitcoin supply. Meanwhile, Bitcoin has plunged from a price of $1,200 last November to about $573 as of this writing.

The concern over Mt. Gox and the ripple effects its collapse is having are making some wonder if this is the end of Bitcoin itself:

Documents purportedly leaked from the company lay out the scale of the problem. An 11-page “Crisis Strategy Draft” published on the blog of entrepreneur and Bitcoin enthusiast Ryan Selkis says that 740,000 bitcoins are missing from Mt. Gox, which roughly translates to hundreds of millions of dollars’ worth of losses, although figures are fuzzy given Bitcoin’s extreme volatility.

“At the risk of appearing hyperbolic, this could be the end of Bitcoin, at least for most of the public,” the draft said.

In a post to his blog, Selkis said that the document was handed to him by a “reliable source” and that several people close to the company had confirmed the figures. Reached by phone, he declined further comment. The Japanese government, meanwhile, has not announced any formal investigation.

Not all agree, however:

Now that what’s left of Mt. Gox’s credibility has been shredded, the company is unlikely to rebound even with a bailout from investors or other members of the Bitcoin community. That leaves the question of what comes next. A host of funded exchanges are ready to take its place. But for an exchange to regain users’ trust after the fall of Gox, it will need new transparency standards and safeguards, some of which have already been proposed.

Others see currency exchanges as a gap technology that the Bitcoin economy is poised to move beyond. Once people are being paid in Bitcoin and spending money in Bitcoin, there won’t be as much need to buy it for cash, which is the primary function of an exchange. “These large exchanges that are international and global are more important in the early stages of Bitcoin when we need price discovery,” says Jon Matonis, director of the Bitcoin Foundation. “You don’t need them in the long run because in a true Bitcoin economy you’ll have a closed-loop system.”

There are precedents for Bitcoin bailouts, as when the Polish exchange Bitomat accidentally erased all its customers’ bitcoins. Mt. Gox bought the company and reimbursed its customers. But a bailout of Mt. Gox would be contrary to Bitcoin’s libertarian ethos and it doesn’t seem necessary. The price has already rebounded. A number of well-funded, reputable Bitcoin companies are standing by ready to sell the idea to the vast majority of potential users who have never used the currency or heard of Mt. Gox. And anyone who wants a bailout is likely to be shamed by the sense of self-determinism that kickstarted Bitcoin in the first place.

Personally, I find myself more with the latter viewpoint than the former. I still think Bitcoin will go out; I just don’t think it will be Mt. Gox specifically. There are going to be other things bringing it down, namely further instances of major theft and fraud and bad management. There won’t be just one issue.

I do want to slightly modify part of my above prediction, namely this:

I think it will also bring down other cryptocurrencies, including Litecoin, Peercoin, and the so-called “Dogecoin,” which is actually real.

I’m not as convinced of this as I used to be. To explain why, let’s take a quote from one of Mt. Gox’s investors:

Roger Ver, a big investor in Mt. Gox, said he did not know if he would ever get any of his lost bitcoin back.

“But the important thing to realize is that Mt. Gox is just one company using bitcoin. The bitcoin technology itself is still absolutely amazing,” he said.

“Even if one email service provider is having a problem that doesn’t mean people are going to stop using email. It’s the same with bitcoin.”

Ver is right, but he’s not expanding it far enough. Even if Bitcoin fails, it will not completely destroy the general cryptocurrency technology or concept. The way I see it now, Bitcoin is basically going to be the sacrificial lamb. It will get a lot of attention, a lot of press–and it will hit every bloody pitfall on the way up and out. Imagine, if you will, a Macross Missile Massacre heading straight for a convoy of cryptocurrencies. Bitcoin will be that ship that takes all of the blows and is destroyed, but the rest of the convoy will keep going–many damaged, perhaps a few unscathed. And the developers of these cryptocurrencies will learn from Bitcoin’s failures and push out innovations that will improve their products. (Granted, most of these cryptocurrencies, if not all, are very decentralized, so there’s no one person controlling the network, but I think the users will work out something.)

Will these cryptocurrencies replace national currencies? Barring a catastrophic global monetary collapse – which, to be fair, there is enough stupidity going on at central banks and governments around the world that we can’t definitively rule out – I would say no. People are going to use them for speculative purposes, for liquidity, and occasionally to buy things, but for the most part I just can’t see them being used for everyday transactions by the majority of the populace. That is going to take a lot of convincing to get people to move outside their comfort zone. For better or worse, being attached to a government makes a currency look “safe”. It will take magic or tragedy to sever that thought in people’s minds.

One thing I do find interesting about all this is a concept and organization known as Mastercoin. I visited their booth at the International Students for Liberty Conference this year and was fascinated. It’s an attempt to build a sort of meta-protocol on top of the Bitcoin – and other cryptocurrencies’ – protocols. It would facilitate seamless transfers between cryptocurrencies and people, and allow the development of what they term “smart contracts.” An example would be if I and someone else had a bet on what the weather would be, and I had Bitcoins and she had Litecoins. The smart contract would check the weather, then instantly transfer the money bet from one person to the other and convert it, without either of us needing to know what currency the other holds. But it was a lot more than that; the guy I was talking to said that it would permit the creation of infinite cryptocurrencies which could be used to support any number of things. One example was a hypothetical “Mathcoin,” which would support mathematical research. If a company accepted Mathcoin and people used it to buy things, the usage of Mathcoin – and the mining, I think – would intrinsically support mathematical research.
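
To make the weather-bet idea concrete, here’s a toy sketch of the settlement logic. Every type and function in it is hypothetical – I’m going from a booth conversation, not Mastercoin’s actual protocol.

```typescript
// Toy sketch of a "smart contract" settling a weather bet between two
// parties holding different cryptocurrencies. All names are hypothetical.

type Currency = "BTC" | "LTC";

interface Wallet {
  currency: Currency;
  balance: number;
}

interface WeatherOracle {
  rainedToday(): Promise<boolean>;
}

// The loser's stake is converted at an exchange rate and credited to the
// winner; neither party ever handles the other's currency directly.
async function settleBet(
  betsOnRain: Wallet,      // say, my Bitcoin wallet
  betsAgainstRain: Wallet, // her Litecoin wallet
  stake: number,           // denominated in the loser's currency
  oracle: WeatherOracle,
  rate: (from: Currency, to: Currency) => number
): Promise<void> {
  const rained = await oracle.rainedToday();
  const [winner, loser] = rained
    ? [betsOnRain, betsAgainstRain]
    : [betsAgainstRain, betsOnRain];
  loser.balance -= stake;
  winner.balance += stake * rate(loser.currency, winner.currency);
}
```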

I’m doing it a major disservice; the Conference was a couple of weeks ago (and it gave me a nasty cough I haven’t really recovered from) and it was a very complicated subject. But it was still fascinating that people are working on this kind of stuff right now. Will it turn into anything? I have no idea. But come on, isn’t this interesting?

Still, Bitcoin…that won’t make it. Pretty sure of it.

2014 Predictions Part 2: Science & Technology

So after giving my political predictions, let’s move on to something eminently more satisfying and cool: that which is in my headline.

Online services will get crappier

Perhaps it is only me, but the past two years have seen many great online services climb up the stupid tree, then fall down it and hit every branch on the way to–well, not the ground, but maybe the bedrock.

It’s a lot of little things. Twitter retired the 1.0 API, which allowed web developers to organically integrate Twitter timelines into websites; the newer 1.1 API forced a lot of people to use the bog-standard Twitter widget, which isn’t very customizable and looks ugly in many environments. (There is a way around it using JavaScript, fortunately, but it took a lot of pain to find that workaround.) Twitter also recently started auto-expanding images in tweets; have these guys seen the sickos on there? One of them may link to Fournier’s gangrene (and no, I will not link to that, because it’s a pretty gross image.) Facebook, meanwhile, keeps futzing with the layout, as well as having the page dynamically update – a great obscenity generator when you’re in the middle of writing a comment and the whole page moves. (And let’s just not get started on the privacy crap.) YouTube implements DASH, preventing a video from loading in full and making it freeze all the time. And then Google+…

…just keeps being Google+, I guess.

Oh, and the rampant advertising…
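About that Twitter timeline workaround: the version I’ve seen builds the timeline with the widgets.js factory function, which lets you control at least some of the styling. Here’s a minimal sketch–it assumes widgets.js is already loaded on the page, the widget ID is a placeholder you’d create in Twitter’s settings, and the option values are illustrative, so treat this as a sketch rather than gospel:

```typescript
// A sketch of the JavaScript workaround for embedding a more customizable
// timeline, assuming Twitter's widgets.js (platform.twitter.com/widgets.js)
// has already been loaded on the page. The widget ID and option values
// below are placeholders, not working values.

// widgets.js exposes a global `twttr`; declare its shape for TypeScript.
declare const twttr: {
  widgets: {
    createTimeline(
      widgetId: string,
      target: HTMLElement,
      options?: { chrome?: string; height?: number; linkColor?: string }
    ): Promise<HTMLElement>;
  };
};

const container = document.getElementById("tweets");
if (container) {
  twttr.widgets
    .createTimeline("YOUR_WIDGET_ID", container, {
      chrome: "noheader nofooter noborders", // strip the stock widget chrome
      height: 520,
      linkColor: "#b22222", // match the site's palette
    })
    .then(() => console.log("timeline rendered"));
}
```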

It used to be that these services were useful, interesting, and fun, and only mildly annoying. But I’ve noticed that, especially in the last 18 months to two years, they’ve gone downhill. I suspect a huge part of this is due to IPOs, at least in Facebook’s and Twitter’s cases. Why? Because the incentive structure has shifted. Instead of delivering value to their users, these services are now focused on delivering value to their shareholders. There isn’t anything inherently wrong with that, but it does lead to a lot of frustration for end users.

The thing is, these companies aren’t focused on delivering value anymore, and the shareholders, I fear, don’t realize that delivering value is the very way they make profits. In other industries, one way of making profits is by suckling at the government teat, lobbying for special favors for yourself and special prohibitions for your opponents. That isn’t really viable in the social media sphere, however, at least not yet (thankfully; hopefully it will never be viable).

Will it actually hurt these companies? To be honest…I doubt it. Yes, there are always people who start hashtags like #TwitterSuxSoBad and groups like “Make Facebook Go Back To Its Old Non-Sucky Ways,” but they’re a minority. So I think this trend will continue in 2014, and we’ll see more and more little annoyances mount. I also don’t think we’ll see any resolution to this year’s privacy concerns. While more people are aware of and upset about these things, and that will only increase next year, I doubt it will hit the critical mass needed for anything significant to happen. Oh, sure, noises will be made to placate users, but anything dramatic? Not unless Google decides to unilaterally implement Lavabit-style encryption and tell everyone else to screw themselves.

Which probably wouldn’t happen.

Facebook will start to slow down

This is less a prediction for next year and more a prediction for 2015-2020, but what the heck, it will likely start next year. Whereas Twitter and Google will weather the various storms they’re facing (controversial changes to blocking mechanisms, ending Google Reader), I’m far less certain about Facebook. Recently, a Business Insider writer argued that Facebook is collapsing under its own weight. Millennials are dropping out of Facebook, and that doesn’t bode well for its long-term viability.

I think next year we’ll see an uptick in stories about people leaving Facebook. (Lifehacker might even run an update of their old article about how to quit Facebook, or how to get a “minimalized” Facebook profile.) It won’t lead to an actual contraction in Facebook’s user base, but growth will slow. Which will make shareholders more antsy than Barack Obama coming home after taking a selfie at a funeral with another woman.

Longer term, I think it spells doom. But then, everything ends. Facebook was launched in February 2004; it will definitely live to see its 10th birthday next year. When it finally goes, I’m not sure; probably around 2019. Put your bets in the comments and let’s come back and see who wins.

We’ll see a 1TB solid state drive under $400

To me, this is a no-brainer, and kinda cheating because it’s so easy. Technology prices drop as adoption rises; as demand increases, economies of scale make these things easier and cheaper to produce. (Distinguished economists can lambast me for what might be faulty economics.) Right now, a 1TB solid state drive (SSD) on Newegg costs between $540 and $700 (there is one refurbished option for $150, but I think that must be some mistake). These will definitely be under $400 next year, possibly as low as $320 (but not much lower than that, unless it’s part of one of those sales Newegg has every three days).

Also, we’ll likely see a 2TB SSD come out next year, probably costing around $750 at first and bottoming out at $600 before the year is done.
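For what it’s worth, put those prices on a per-gigabyte footing and the prediction amounts to roughly a 26-41% drop. The quick math below uses only the dollar figures quoted above; nothing here is market data.

```typescript
// Per-gigabyte arithmetic for the SSD prediction above. The dollar
// figures are the ones quoted in the text; nothing here is market data.

const perGB = (priceUsd: number, capacityGB: number): number =>
  priceUsd / capacityGB;

const today = perGB(540, 1000);   // cheapest 1TB drive now: $0.54/GB
const ceiling = perGB(400, 1000); // predicted "under $400": $0.40/GB
const floor = perGB(320, 1000);   // predicted floor: $0.32/GB

const drop = (to: number): string => ((1 - to / today) * 100).toFixed(0);

console.log(`today: $${today.toFixed(2)}/GB`);
console.log(`predicted: $${floor.toFixed(2)}-$${ceiling.toFixed(2)}/GB`);
console.log(`implied drop: ${drop(ceiling)}%-${drop(floor)}%`); // ~26%-41%
```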

2014 will be Linux’s year

Maybe I’m being way too optimistic, but I see a bunch of things coming together to benefit Linux.

A couple of these things have been long-term developments. The first is admittedly anecdotal: in the past couple of years, the number of people I’ve personally witnessed using Linux computers has jumped from absolute zero to about half a dozen. Granted, that’s small, and it’s just my personal experience, but it’s something. Next, there are far more Lifehacker articles on how to actually use Linux (usually Ubuntu, because it’s supposedly very user-friendly), and people out there are actually tinkering with this stuff. Third, Android, the most popular mobile OS in the world, is based on Linux; because of this, hardware manufacturers have had to write drivers that support the Linux kernel, which means Linux now works on far more hardware combinations without waiting for someone in the open source community to reverse engineer a driver. It used to be–back in 2007, when I first dabbled with Linux–that wireless was a hit-or-miss proposition. In 2013, I downloaded and installed Kubuntu and it worked perfectly. This is catching on, and people are noticing.

Two other things, external to Linux, will help it along. The first is that, while Windows 8 is actually a great operating system, it has been a total bust commercially. It might even be stalling Windows sales. In fact, Microsoft had previously announced it was ending Windows 7 sales next October; that decision has been reversed, no doubt because Windows 8 adoption has been about as real as one of the Surface commercials. Windows 7 is still a viable operating system. However, it costs a lot of money.

And the other thing: Windows XP will finally go unsupported in April 2014. As they say on Twitter: #DOOOOOOOM.

According to NetMarketShare, as of December 21st, 2013, nearly a third of users were still running Windows XP. Most of those are either idiots, senior citizens and the like who can’t upgrade, or businesses with mission-critical applications. (Ed Bott went over this pretty well at ZDNet.) Still, it’s pretty insane to me that almost a third of people are using an operating system that will be 12 years old come next year. Granted, a huge chunk of that is in China (the above report was global), where nobody upgrades their blasted computers. Still, that’s incredible. And foolish. After April, Windows XP will be dangerous to use. Yes, there will still be security updates from third parties in the IT security business, but without official support from Redmond, using XP will be a dubious proposition.

Enter Linux. While Windows 7 and 8 are costly and can only run on modern systems (I think both require at least 2GB of RAM to operate, if I’m not mistaken), Linux is both free (usually) and runs on a wide range of hardware, including some really old stuff. Plus, because it’s open source, the code is there for anyone to see, meaning security holes are patched pretty quickly instead of being held in secret at Microsoft’s engineering headquarters, so it’s generally more secure. And for those who want support, there are paid options. The most famous are probably Red Hat (whose distribution is one of the major bases of the Linux family, next to Debian) and Canonical, which makes Ubuntu, but Oracle and Novell also have their own enterprise Linux offerings (Oracle’s being based on Red Hat Linux).

When XP goes down, people are going to have to move somewhere. Many are going to try Linux. And thanks to Android, Linux will work. So I think 2014 will be a good year for Linux, and it might finally break out of the “only for nerds” trap it’s been living in for years. But that depends on a couple of things: good documentation, and whether it comes preinstalled with Flash, MP3 codecs, and the like. If the documentation is terrible, and if the free software fanatics prevent mainstream Linux distributions from shipping with nonfree codecs, drivers, and so on, then you can forget about any Linux gains. Nobody will go for that; they would rather pay through the nose than deal with an OS that ships without support for their Flash games or mice or monitors or printers (people still print, you know). So this depends on how hard the Linux teams work for it. If the effort is not made, the success will not happen. If the effort is made, I think Linux will see tremendous gains.

Now, what that will do, I’m honestly not sure. It might make the Internet more secure…

Google+ is going to continue to have nobody on it

Sorry.

Bitcoin will crash

This is just a hunch, but not too long ago China banned Bitcoin transactions (as did Thailand and India, more or less), and the value of Bitcoin plunged about 40%. That’s pretty damn volatile, and while I am not a monetary expert, I don’t think that’s good for a currency. If the value of your currency changes that much that quickly (and consider its rapid growth over 2013, which I think was around 50x: if you bought $100 worth in January, it was worth $5,000 at the end of the year), it’s not going to be very usable.
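To see just how wild that is, work through the paragraph’s own numbers. The 50x and 40% figures are the ballpark ones quoted above, not precise market data.

```typescript
// The volatility claim, worked out with the ballpark figures from the
// paragraph above (not precise market data).

const january = 100;                 // $100 of Bitcoin bought in January
const peak = january * 50;           // ~50x growth over 2013 -> $5,000
const afterCrash = peak * (1 - 0.4); // a ~40% plunge -> $3,000

console.log(`Jan: $${january}, peak: $${peak}, after the plunge: $${afterCrash}`);
// A unit of account that moves this much in weeks is hard to price
// anything in, which is the argument being made here.
```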

Now, I could be totally wrong about that, and also totally wrong about this, but I have a hunch Bitcoin will dramatically decrease in value next year and might go out entirely. I think it will also bring down other cryptocurrencies, including Litecoin, Peercoin, and the so-called “Dogecoin,” which is actually real. Then again, Bitcoin’s volatility has already hit the reputation of these cryptocurrencies…

There are going to be some awesome technological advances next year

We’ve seen some incredible things this past year: Soylent, dataSTICKIES, massive curved TVs, not to mention all those really cool technologies that modern sci-fi is ignoring.

I have no idea what specific technologies will emerge next year. But I really think there will be some super-cool technological advances, whether it’s those guys working on building a sun in New Jersey getting anywhere, or the scientist refining the formulae for faster-than-light travel finding something, or something completely different.

One thing is clear: 2014 is going to be cool.

Why I’m Not A Fan of Apple

A couple of weeks ago, I got into a debate on Facebook about Android versus Apple. I have never really been a fan of Apple products, despite owning an iPod for two years and using an iMac and a MacBook Pro at work. There are many reasons why I prefer Android and/or Windows over Mac OS X and iOS. So let me go through them.

Cost

The number one reason why I don’t like Apple products is that they are ungodly expensive. Looking at their current website, a 15″ MacBook Pro (non-Retina, which I’ll get to later) costs $1,799. A similarly specced PC laptop from Newegg.com costs $629. That’s a $1,170 difference, and I ask you: what do you get for that $1,170 in a MacBook that you don’t get in that Windows 8 ASUS laptop? “Build quality”? Please. You don’t buy a laptop because it looks cool or attractive, and if you do, you’re fundamentally misunderstanding the point of a laptop. You buy motorcycles or cars because they look sweet and cool.

Computers are appliances. Sure, if they look nice, that’s a plus. But that’s not the core requirement. What you need and want is a computer that performs well, with a relatively fast processor, lots of RAM, plenty of hard drive space, and the ability to run the programs you want, at a good price point.

And quite frankly, after seeing scuff marks and other signs of wear and tear appear on a MacBook, I’m not really convinced the build quality justifies the price.

But it basically comes down to cost. MacBooks are not economical compared to PC laptops, or even Linux machines. Frankly, buying a MacBook is a stupid financial decision unless you’re a professional or semi-professional media creator (music producer, photo editor, videographer, etc.).

Similar things can be said about Apple’s mobile products. While there is less of a price gap in the mobile realm, thanks to carrier subsidies and other factors, if you compare the iPhone with, say, the Galaxy S4, you realize you’re getting much better bang for your buck with the Galaxy. You get a faster processor, a bigger screen, more memory, and more talk and standby time. So then, like anything, it comes down to subjective desires and values. But when you choose an iPhone over an S4, what are the values you are choosing?

Looks Over Performance

I mentioned this before in the Cost section, but it bears repeating: for Apple, you’re buying looks over performance. In a sense, you’re buying sex. And do you really want to spend money on a whore?

Okay, that’s harsh (but the line demanded to be written). Yet when you look at it, it’s clear. Apple invests a lot into making its devices look nice. The smooth aluminum of the MacBook, the sleek dark looks of the iPhones and iPods–you do have to admit they are good-looking. But again, as I said above, that’s not why you get a computer. That’s not even why you get a phone. You get a phone to call people, and in this smartphone era, to also browse the Internet a bit, check social media, text folks, listen to music, maybe watch videos, and perhaps handle scheduling and other productivity tasks. It’s the same thing with computers, except that A) you’re doing more and B) you’re usually creating content instead of being limited to just browsing it.

True, Apple did just introduce the iPhone 5c, which I see as a belated recognition that they have to stop being providers to the 1% and actually supply the masses if they want to continue. For almost the entirety of Apple’s mobile product history, it was sleek iPhones for the upper class.

Think about what that means when you buy into it. You really want to show off that you have the latest iPhone? (That’s what iPhone fanboys do, don’t deny it.) How shallow must you be to need to announce that you have the absolute latest smartphone from Apple? It’s simultaneously pretentious and pathetic. I may ask people about their phones from time to time, but it’s a quiet matter, usually when I’m in the market for a new one.

This is not to say you should go out and buy a completely ugly phone. But looks are really not that important, especially in an era when we cover all our phones in cases. Performance is what matters: can it do what you need it to do? And no, you don’t need your phone to woo a date for you. If you’re relying on that, you’ve already lost.

Severely Limited Consumer Choice

This one probably isn’t going to matter to most Apple customers out there, but it’s one that really annoys me: significantly reduced consumer choice. You are very much locked into Apple’s vision when you buy an Apple product. With Android and Windows, you get significant choice and can lose yourself for quite some time evaluating what you can get for your money. I could not even count the number of Windows hardware configurations in existence right now. Hypothetically, I might be able to with Android phones (restricting the list to only those currently manufactured and distributed), but I won’t, because that would still be quite an endeavor. The point is, you have a lot of choices: from budget computers for old grandmas, to high-powered gaming monsters for teenage nutjobs, to workstations for professional content creation and scientific work; from small phones with a 3.2″ screen and a slide-out QWERTY keyboard for basic functions, to BIG DAMN SCREENS with 6.4″ portals to the cyberworld, quad-core processors, huge storage, and room for a mammoth microSD card of up to 128GB. The possibilities are not quite endless, but they go pretty far.

Even after you purchase your device, you can engage in a lot of customization and, if you’re into it, “hacking” to make it your own. On my own Windows 8 laptop, I installed a program that gave me back my start menu and start button, bypassing the Windows 8 Metro interface completely. On my Android phone, I long used something called “Smart Launcher,” which radically transformed the way the Android OS looked and even, to a small degree, operated. It’s very flexible and gives me loads of choices for optimizing my user experience.

Apple…not so much.

For ages, Apple remained stuck on the 3.5″ screen size, competitors (and customers) be damned. Perhaps that’s why iOS’s market share has tumbled to about 13.2% this year while Android’s is over 70%. (Not the only reason, mind you, but one of them.) Even as variety blossomed in screen sizes, from 2007 to 2012 all iPhones had 3.5″ screens. It wasn’t until the introduction of the iPhone 5 that they gave you another choice: the 4″ iPhone. Yet there still wasn’t much choice; if you wanted something with similar capabilities at the 3.5″ size, you were stuck with the iPhone 4.

Another thing: once you buy the product, there’s not much you can do to optimize it. Sure, you can jailbreak it, but that voids your warranty and probably isn’t a good idea unless you know what you’re doing (i.e., you’re an Apple engineer, you write for Lifehacker or The Verge, or you just read Slashdot incessantly). Remaining within the lines, you’re basically stuck with the standard iOS material. Now, that probably doesn’t bother most people, and that’s fine. But it’s why I wouldn’t purchase an Apple product (well, not until they come out with at least one with a 5″ screen), and I do believe it’s a reason why Apple products are inferior.

One of the greatest things about capitalism is that it’s an economic system based on choice. You are free to choose whom you do business with, what you buy, how much you buy of it, and so on and so forth. Companies that give their customers choices do great work, post great numbers, and earn healthy profits. They succeed where others fail. Not only do they do the traditional job of an entrepreneur, providing a means to alleviate a problem the customer has; they do so by giving the customer a number of ways to alleviate that problem. Regrettably, that has been changing over the past couple of decades, as crony capitalism advances and solidifies, displacing true free-market capitalism, and as the collusion between big business and big government grows. Companies stifle competition, give customers one-size-fits-all products, and then tell them to “deal with it.” Consumer choice is being reduced across the board.

Apple has always taken that route with its products. You get limited customization with its Macintosh computers, and since Apple controls the software and the hardware, and exercises tight control over its supply chains, there aren’t really any alternatives. (A company called Psystar tried to make its own Mac OS X computers in 2008, but it quickly went bankrupt from a combination of incompetence, lack of credibility, and getting strategically carpet-nuke-bombed by Apple in court.) Apple also exerts much tighter control over its App Store than Google does over its Play Store. On the one hand, this means fewer garbage apps (supposedly); on the other, good apps don’t get approved, and in the end you have fewer choices you can make.

I like choices. I like freedom. And Apple’s policy of being very restrictive and selective in what it gives you, and forcing you to do things one way, does not strike me as a good bargain. And if you can’t really use the product in your way, then why did you spend money on it in the first place?

The Cult of Mac

Finally, the biggest part…the Cult of Mac.

The first commandment was “Thou shalt not use Flash.” At least we agree on that.

Two out of three Macintosh users are, to put it bluntly, feverish fanatic fanboys who fap to fabulous fantasies. They constantly go on and on about the superiority of Mac and iOS products, usually ranting endlessly about how Windows always gets infected by viruses and how Mac OS X systems are impervious to assault. Never mind that at hacker conferences Mac OS X always gets hacked first, that in 2008 a leading security guru called Macintosh users who said this “ignorant,” and that despite Mac’s alleged superiority it has less than 8% market share. It is manna from heaven, and if you do not use Mac, you are a heathen.

I wish I were making this up, but I’m not. Back in 2004, a journalist wrote a book called The Cult of Mac detailing the religious-like qualities of the Apple fandom. Uncyclopedia, a parody of Wikipedia, has a joke article whose image caption reads, “It’s goofy pseudo-religious iconography like this that makes this article so easy to write.” Nearly every comment thread about Apple has these crazy fanboys jumping in with a vigorous defense of their company, to the point where you wonder whether they’re trying to be funny or someone actually spiked their Monster Energy Drinks with psychotropics. (Not that Android fanboys are innocent; the Android-versus-Apple wars are like a light-hearted version of American politics, with virtually no stakes whatsoever.) And even though there are massive issues with iOS 7 and it appears to be, yet again, half-baked, they still go on to say that the iPhone 5S is literally the greatest phone ever made and they’re going to really explode and overtake everybody! We’re being super serial, you guys!

Of course, every product line and company has a fanbase. There are fan forums dedicated to Suzuki motorcycles, Keurig coffee machines, and even IKEA (which is its own form of nuttery in some cases). This is understandable: if you like a product and want it to succeed, you’ll naturally want to evangelize it to others so they buy it and keep it going. (Unless you’re a hipster. But I digress…) However, the degree of intensity many Apple fans reach is just absurd. They’ll completely ignore facts and smear other products indiscriminately, especially anything Windows. They’ll spout ludicrous lines about how Windows is constantly infected with viruses and everyone everywhere has major problems all the time, etc. If you’re not a Mac user, well, you just don’t understand. Also, you’re a goddamn heathen.

I have never seen anything like it. No other commercial product has a fanbase quite like that of Apple’s Mac OS X and iOS. I’m struggling to think of one…and I just can’t. I’m pretty sure such a product doesn’t exist. If you do know of one, leave it in the comments, because I would be really interested to know.


Anyway, that’s why I personally do not use Apple products, and why I think Apple products are generally inferior and not worth the hype. Of course, other people are free to disagree and continue to enjoy their Apple products. Not every Apple fan is crazy, and there are some legitimate uses for Mac OS X. (Namely, professional media creation. For reasons I do not know, OS X seems to be better at professional video and photo editing, although you can easily do that work on a PC too; Mac just seems to be preferred.) But for the average consumer, Macs are needlessly overpriced, and I can’t see them giving you too many bennies. In this day of variety and choice, iPhones are exorbitant luxuries that should make you think twice about how much you really care about poor people.

But hey, to each their own. Me, I’m going to save that money and buy a Suzuki GS500F and flip you off when I blast past you on the street.