Not Just Siri Anymore: 6 Intelligent AI Personal Assistants

By Jesse Snider

Right now, millions of phones around the world contain an artificial intelligence known as Siri - a benevolent HAL 9000 analogue designed to follow vocal instructions and carry out basic tasks. Only Siri isn’t really an artificial intelligence (AI). She’s more of an Intelligent Personal Assistant (IPA), which, as it stands right now, is basically a more personable version of Let Me Google That For You. While she can respond to a very limited logic-tree-style set of commands, she’s more popular for her novelty than her functionality, giving rise to a wave of imitators and would-be substitutes. Even though Siri only takes up roughly 700 MB of memory, some users find other personal assistants much more appealing, and use backend methods to remove her to free up room on their smartphone. Now more and more devices are getting IPAs of their own, giving Siri some serious competition:

1: Microsoft's Cortana

2: Google Now

3: Samsung S-Voice

4: Blackberry Assistant 

5: Soundhound (Hound)

6: Amazon Echo
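The "logic-tree-style" command handling described above can be sketched as a simple rule-based intent matcher. This is a toy illustration only; none of the patterns or intent names below come from any actual assistant, and real products use far more sophisticated speech and language models:

```python
import re

# Toy rule list: each entry maps a keyword pattern to an invented intent name.
INTENTS = [
    (re.compile(r"\bweather\b"), "get_weather"),
    (re.compile(r"\b(set|start)\b.*\btimer\b"), "set_timer"),
    (re.compile(r"\b(call|dial)\b"), "make_call"),
]

def match_intent(utterance: str) -> str:
    """Walk the rule list top to bottom and return the first matching intent."""
    text = utterance.lower()
    for pattern, intent in INTENTS:
        if pattern.search(text):
            return intent
    # Fall through to a web search -- the "Let Me Google That For You" behavior.
    return "web_search"

print(match_intent("Set a timer for ten minutes"))   # set_timer
print(match_intent("What's the meaning of life?"))   # web_search
```

The fall-through case is the telling part: anything outside the hand-written tree just becomes a search, which is why these assistants feel more like a personable search box than an AI.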


Read the whole story at our friends at

Projector Touchscreens are Here for Real

By Dia Ascenzi

Ten years ago, touchscreens were considered very high-tech, and they weren't something everyone had immediate access to, not like today's smartphones. Now even touchscreens aren't such a big deal anymore. We have devices that can project an interactive keyboard onto any surface, like this R2D2-model virtual keyboard, and technology is advancing exponentially, with new devices and features coming out left and right. This year, Lenovo introduced a phone that can project an interactive touchscreen onto any surface. The Lenovo Smart Cast boasts the world's smallest laser projection module and has wall and surface settings. With this interactive laser projection, the possibilities include a computer keyboard, a piano you can play in real time with seemingly no lag, and potentially any other program your phone holds. Soon, you may be playing games or surfing the web on the surface of your desk.

Read the full story at our friends at

Final Fantasy VII Remake to be Full HD with Unreal Engine 4

By Andrew Hendricks

Final Fantasy VII, first released in 1997 on the PlayStation (and a year later on PC), is widely considered, if not the best RPG of the 90s, then certainly the one with the highest nostalgia factor for millennials.

While many game journalists were pessimistic about the possibility of a Final Fantasy VII remake coming to any handheld or next-generation console, it appears Square Enix was quietly working on it all along, porting the nostalgia-filled pinnacle of 90s RPG gaming onto a next-generation engine.

Many thought FFVII was a no-brainer for a remake, and many were puzzled it took this long. Although the game boasted a uniquely entertaining leveling system, a compelling storyline, and great world-building, even in 1997 fans joked about the blocky character designs (outside of combat and cutscenes) as well as the need for multiple discs to play this huge game. Only two years later, Final Fantasy VIII and The Legend of Dragoon would be released on the PSone as well, both making FFVII's graphics appear comparatively ancient.

Square Enix drew the ire of fans and detractors alike when they revealed a snippet of FFVII rendered as a tech demo for the PS3. The tidbit was so tantalizing, the graphics so crisp, and the nostalgia factor so high that no matter how Square Enix spun it, viewers couldn't help but think: it's happening, it's really happening! But it wasn't. What was meant to be an artistic tech demo of the PS3's graphics in 2005 ended up being a blunder, saddling Square Enix with nearly a decade of constantly swatting away rumors that a FFVII remake was in the works. The game's PC "re-release" on Steam, which came only after Square Enix's long refusal to make the game easily downloadable again (our one lingering hope that a remake was coming), seemed to squelch the possibility that it was happening at all, let alone in the next year or two.

Then, continuing to confuse fans, headlines began floating around that a Final Fantasy VII re-release was coming to PS4 and iOS, but that it was barely a remake, prompting actual headlines such as "Final Fantasy VII Is Coming To PS4, But This Is Not A Remake" and derisive follow-ups pointing out that the graphics had been improved ever so slightly, but the game had not been remade: "Final Fantasy VII PS4 Version Isn't The HD Remake We Hoped For."

But have no fear: the real remake is coming, and full "it's happening" status is certain. At E3 2015, Square Enix unveiled the remake with a spectacular CG trailer to a cheering audience. And this trailer was finally not just a demo designed to make nerds cry. Running on Unreal Engine 4 instead of an in-house engine, Final Fantasy VII is getting a unique kind of revamped remake, built atop a tested and popular engine family behind visually striking games like Gears of War 4, BioShock, and Deus Ex.

“This is a bit of a surprise,” writes Venturebeat reporter Mike Minotti, “as Square Enix often uses in-house engines for Final Fantasy, its premier franchise. The Playstation 3’s Final Fantasy XIII used Crystal Tools and the upcoming Final Fantasy XV employs Luminous Studios.”

Minotti goes on to explain the obvious reason for this deviation from the norm: “In-house engines like Crystal Tools and Luminous Studios are often expensive and difficult to make, while using licensed ones like Unreal Engine 4 can free developers of much grunt work that comes along with a homemade system.”

Sporting actual voice acting and "dramatic changes" to the battle system, Square Enix is hoping to re-capture gamers who may be burnt out, having just played the recently released version of the re-release. After all, better graphics alone might not be enough to make the exact same random encounters and boss battles worth slogging through again. Who am I kidding, yes they would be. The full remake's release date is still "To Be Announced," but here are some spectacular videos of the in-development footage PlayStation has released to show us just how worthwhile the wait is going to be, and wait we probably must.

The game will stylistically attempt to recreate the success of Advent Children, the CGI film sequel to Final Fantasy VII. Hoping to deliver more than just a film's graphics atop a nearly two-decades-old game, producer Yoshinori Kitase warns fans that even with Unreal Engine 4, the team will still take its time to make the HD remake a unique and satisfying game. Tamping down rumors of a release before October 2016, Kitase was quoted by Siliconera as saying: "I believe that this year will still be a year of preparations for Final Fantasy VII Remake. I'd like to create a new kind of value for the hardware that is the PlayStation 4 for our next announcement."

It’s happening. For real. Using Unreal Engine 4.




The Dark Questions Self-Driving Cars Raise


By Michael Nurnberger

"Automobile" literally means "self-moving," and if self-driving cars become the norm, that may well become the understood nomenclature. In this future, drinking and driving may no longer be a crime, and falling asleep at the wheel will not be an issue. This is, of course, contingent upon self-driving cars taking you to and from your destination on a slurred "home" or "work." Google's safety record for its vehicles is excellent, considering the nature of their accidents. The overwhelming majority of the few accidents so far have been fender benders caused by other drivers running into the self-driving car, usually at stoplights and mostly while the cars were moving at around five miles per hour. People appear prone to rear-ending the cars; the worst accident, and the only injury so far, resulted from such a collision. The most interesting incident of all occurred when two self-driving cars narrowly avoided hitting one another.

Self-driving cars are an objectively safer alternative to normal human driving, under normal circumstances. This much is clear. Google has even posted the accident that led to the minor injury, and it's quite telling. If the world had only self-driving cars, it would be a much safer one. However, there are potential dangers as well. In a terrifying demonstration, Charlie Miller and Chris Valasek hacked a Ford Escape, jerking the steering wheel around, forcing the brakes on, and disabling them entirely. As might be expected, the ability to remotely control someone's vehicle creates an entirely new niche of car-security issues once the car is doing the driving. A security bill introduced by Senators Edward Markey and Richard Blumenthal would help protect consumers from both data collection and the hacking of their vehicles by requiring enhanced vehicle security measures.

How will vehicles react not only to hacking of their electronics, but to attempts to exploit the self-driving program itself? This raises the question of how self-driving vehicles will respond to certain quandaries. If someone runs in front of the car with a gun, how will the car react? This is an extreme example, and hopefully one you would never see, but in such a situation the creators of the program could become liable if harm comes to the passengers, depending on how the vehicle responds.

What if a situation arises where the car must respond to stimuli and potentially cause damage to others in order to save the driver? This creates a liability nightmare. If the vehicle strictly and narrowly obeys traffic laws, it can still fail through inaction. For example, how does the vehicle respond to a carjacking? It can call 911, but will it let the danger supersede the breaking of the law? A human in control might simply slam down on the pedal. The car's AI has to account for a ludicrous number of scenarios, each a one-in-a-thousand or one-in-a-million chance, but each possible. Liability lawsuits would come from every direction to exploit such a flaw. After all, who wants to purchase a vehicle that doesn't take the driver's safety as priority number one? But who would want to live in a world where autonomous cars (quite plausibly, given the right scenario) would readily sacrifice a full school bus to save their single passenger? There is no single "right" answer to these kinds of problems, which is why autonomous car manufacturers like Google are quick to sidestep this morbid line of questioning in favor of touting the (admittedly spectacular) safety record of their driverless vehicles.

For now, companies continue testing, and those tests appear to be going quite well: the Google cars are being made as safe and as convenient as possible, and every accident so far has been the fault of those who hit them. On the other hand, the cars also have the potential to harm, through either their action in judging situations or their inaction in dangerous ones. This is why you should expect more articles from ethicists and AI programmers alike imploring: "Why Driverless Cars Must Be Programmed to Kill." It is, of course, a morbid thought. However, with technology able to mitigate the loss of life, it would be negligent of driverless car manufacturers not to attempt to solve this modern-day trolley problem.
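One way to see why there is no single "right" answer: any decision policy ultimately reduces to a weighting of harms, and the weights are a value judgment, not an engineering fact. A deliberately oversimplified sketch, with every number and name invented for illustration and no resemblance to any manufacturer's actual software:

```python
def choose_action(options, passenger_weight):
    """Pick the action minimizing weighted expected harm.

    Each option is (name, expected_passenger_harm, expected_bystander_harm).
    `passenger_weight` encodes the value judgment: 1.0 treats everyone
    equally, while larger values privilege the passenger.
    """
    def cost(option):
        _, passenger_harm, bystander_harm = option
        return passenger_weight * passenger_harm + bystander_harm
    return min(options, key=cost)[0]

# Swerving protects the one passenger but endangers several bystanders.
options = [
    ("stay_course", 0.9, 0.0),   # high risk to the passenger, none to others
    ("swerve",      0.1, 8.0),   # passenger likely fine; bystanders at risk
]

print(choose_action(options, passenger_weight=1.0))   # stay_course
print(choose_action(options, passenger_weight=20.0))  # swerve
```

The same code, fed the same scenario, chooses differently depending on one tunable number. That number is exactly the thing no manufacturer wants to publish.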


Apple's New "3D Touch" Introduced In iPhone 6S

By Dia Ascenzi

Last month, Apple released yet another iPhone model: the iPhone 6S. While the average iPhone user might be inclined to believe that the newest model does not have any significant upgrades (besides the price), this new phone is a big improvement on previous iPhones. Sure, it has a better camera and processor (making it faster than its predecessors), but the biggest change in this new model is its new "3D Touch" feature.

3D Touch is a new feature that makes the iPhone 6S truly different from previous models. Using new technology that senses how hard a user presses on the screen, the iPhone 6S touchscreen is the most interactive Apple device yet. With this feature, you can 'peek' at files and pages by lightly pressing on the screen. If you want to 'pop' into the page, simply press more forcefully. Pressing firmly on an app icon on the home screen brings up a menu of 'quick actions,' or frequently used actions, and tapping one executes it. This new feature will change the way you use your iPhone touchscreen, making it both easier and more efficient to use and navigate.
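The peek/pop distinction boils down to thresholding a continuous pressure reading. A minimal sketch of the idea, with the force scale and both threshold values invented for illustration (they are not Apple's actual numbers or APIs):

```python
# Hypothetical thresholds on a normalized 0.0-1.0 force reading.
PEEK_THRESHOLD = 0.35   # press this hard to "peek" at a preview
POP_THRESHOLD = 0.75    # press harder to "pop" the content open

def classify_press(force: float) -> str:
    """Map a normalized force reading to a touch gesture."""
    if force >= POP_THRESHOLD:
        return "pop"    # open the page fully
    if force >= PEEK_THRESHOLD:
        return "peek"   # show a lightweight preview
    return "tap"        # ordinary touch

print(classify_press(0.1))  # tap
print(classify_press(0.5))  # peek
print(classify_press(0.9))  # pop
```

The ordering of the checks matters: a hard press also exceeds the peek threshold, so the pop case has to be tested first.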

Read the full story at our friends at

How This Young Redditor Created the Default Reddit Music Player

What started out as a side project became a little piece of the web that countless redditors have enjoyed using. 

"My story on how I built the Music Player for Reddit from the very start two years ago until I launched it on Reddit and became an overnight success."

Read about the entire saga at our friend's website on

An Expert's Guide to Good Password Hygiene

Everyone hates having to remember passwords, from one created a decade ago to the new one you made last night. Those requiring symbols, numbers, and alternating lower- and uppercase letters can become infuriating. There are a few ways you can make it easier on yourself while making sure your password isn't embarrassingly crackable.
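One widely recommended approach is a long passphrase of randomly chosen words, which is far easier to remember than symbol soup. A minimal sketch using Python's standard library; the tiny word list here is a stand-in for illustration, and a real generator should draw from a large curated list such as the EFF's diceware lists (~7,776 words):

```python
import secrets  # cryptographically secure randomness; random.choice is NOT suitable here

# Stand-in word list for illustration only.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern", "mango", "quartz"]

def make_passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Join n randomly chosen words; entropy grows with list size and word count."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g. "mango-orbit-staple-quartz-lantern"
```

With a full-size word list, five words gives roughly 64 bits of entropy, which is comfortably out of reach of casual cracking.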

Read the full article by our friend at

How to Secure Your Internet Connection

We all know to go into Private Browsing or Incognito mode when we're making suspect searches or shopping for birthday gifts (yeah, shopping). However, if you're really serious about securing your internet connection, there are a few other steps you can take, including making a habit of browsing with a VPN.

Read the full article by our friend at

The Ideal Smartphone Screen Size: A History of Its Evolution


Years ago, a small phone was desirable. Now, most consumers won't even bother considering a phone with a screen less than four or five inches across. It's understandable that consumers would desire a smaller, slimmer phone when one takes into account the comically large "bricks" that represented the first foray into true mobile phone development. Companies scrambled to cram more capabilities into a smaller space, giving rise to the BlackBerry and the flip phone.

From brick-sized mammoths that required an antenna, to palm-sized flip phones with thumbnail screens, to today's widely used borderline tablets like the Galaxy Nexus and iPhone 6, mobile phones, let alone smartphones, have gone through quite an evolution since they first emerged.

Read the full article at our friends at

Smartphone Storage Scarcity — How Much Space do you Really Have?

By Andrew Hendricks

iPhone users who are gluttons with their data have again faced that familiar problem of an iOS update requiring more storage space. More storage space?! “I’m filled to the brim as it is!” While not exactly Sophie’s Choice, you don’t want to end up deleting something you’ll regret just to get the latest updates on your phone.

Read the full article at our friends at for four solid tips on how to maintain enough storage space on your smartphone. 

Google Glass a Legal Grey Zone

By Andrew Hendricks

As Google Glass has moved at a snail's pace from a trade show novelty to a real-world gadget, the friction between old and new has already resulted in some iffy legal questions.

Last year, we discussed some of the questions raised by a patent filed by Google for a dreaded "pay-per-gaze" model of verifying ad views in a theoretical future, though at the time the company had given no hint it intended to use it. Yet with the New York Times now releasing its headline-reader app, the first official third-party Glass-friendly app, Google Glass consumers have already had a few very public run-ins with the law, turning what were theoretical questions into something a curious public needs answers to now.

For those who wear Glass and drive, there has been at least one ruling that is somewhat clarifying. Cecelia Abadie, a California woman, was pulled over and charged with a traffic violation for wearing Google Glass. Abadie challenged the ticket in court.

"I was wearing it because I do wear it all day, but I was not using it," Abadie said in an interview for a local San Diego TV station.

Some states, West Virginia being the first, are drafting legislation to explicitly prohibit driving with Google Glass and similar products.

Abadie went on to complain that, "a lot of people don't understand how the device works... and the fact that you're wearing it even if the device is turned on doesn't mean that you're watching it or using it actively."

Cecelia Abadie was found not guilty of the traffic violation, essentially because even though a California statute forbids motorists from using unmounted recording devices (unlike, say, a mounted dashboard cam) while driving, it's possible to have the device on and not be using it. As Google is already looking into prescription Glass, the ruling should let the company breathe a sigh of relief; however, app developers are undoubtedly frustrated by the implications of the ruling.

One of the originally touted benefits of Google Glass was that it is not just a phone or TV screen resting directly in front of your eyeballs, but rather a piece of sophisticated technology that augments your reality in a useful way never before seen outside of science fiction. For example, Glass can offer warnings and situational-awareness augmentations in daily life, say, alerting a jogger to a pothole or a driver to dead-stop traffic ahead. Wired magazine is already reporting on a Google Glass-type computer in development for military soldiers.

Thus a question is raised that the courts may have to address in the future. If there is scientific evidence to prove that an app made for drivers on a Google Glass (or similar type) device dramatically increases their performance, wouldn't you want all commercial drivers to use such a device?

It would be a difficult issue to untangle, particularly because of the ability to root such devices (similar to jailbreaking an iPhone and installing your own software). Even if there were some way to shut off all non-GPS or non-vehicle-app functionality, it would be impossible for an officer to know at a glance whether a driver was actually using Glass. One can see how dangerous a driver using an unfamiliar third-party app while driving could be. Pop-up ads are difficult enough to deal with on a smartphone screen without having to worry about a similar interruption while merging at 75 miles an hour.

Legal authorities are suspicious of the increasing use of Google Glass, and not just for safety reasons. One AMC moviegoer found himself on the wrong side of the law for wearing his device during a movie.

In this instance, management assumed a man with Google Glass was recording the movie he was watching. The man claims his device was off, yet somehow, in a very quick turn of events, Homeland Security and FBI agents were called to the theater and hooked the man's device up to a computer, assuring the moviegoer that they knew he was a movie pirate.

In the end, AMC issued a lukewarm mea culpa explaining that while they are "huge fans of technology and innovation" (as they put it), wearing a device capable of recording into a movie is not appropriate. The MPAA was on site at the time and, possibly against the better judgment of a service-side industry, Homeland Security was called, as it now oversees movie theft.

One thing is certain: if you are a Google Glass user, it is best to be cautious. As new and improved models are developed for different purposes and under different brands, we will see what different state laws say about specific uses of the device. Only time will tell whether eyewear computers will ever be accepted as an "everywhere" item no different from a cellphone, or whether social pressure and legal restrictions will prevent these devices from reaching full cultural penetration.


Windows 7, ate, 9: Windows Jumps from Windows 8 to Windows 10 for No Discernible Reason

By Andrew Hendricks

Microsoft recently announced that their hyped, in-progress, “Unity-oriented” next operating system will officially be called Windows 10. Their reason for skipping over 9 is still unclear at this point, but we can imagine the answer lies somewhere between the lackluster success of the touch-oriented Windows 8, and Microsoft's desire for an easier marketing campaign.

Asked about the new OS and the jump to 10 at the press event following the unveiling, Microsoft's VP of Operating Systems, Terry Myerson, said: "Windows 10 will run on the broadest amount of devices. A tailored experience for each device. There will be one way to write a universal application, one store, one way for apps to be discovered, purchased and updated across all of these devices."

With all the mentions of “One” we will be hearing, it is almost surprising they didn't decide to throw out counting entirely and just call it Windows One.

Oh wait, they already did that with the Xbox.

Currently, Windows 10 is set to release to the general public later in 2015. What makes the Windows 10 name especially humorous is that it was an April Fools' headline just last year: "Deeming Windows 9 'too good to release,' Microsoft execs shelve follow-up to Windows 8 and proceed to Windows 10."

In saner branding and naming news, Microsoft announced that it would also drop the "Bing" name from its app store. Functionality will remain unchanged for these apps, but they will henceforth be referred to as MSN Apps. While it may have been Steve Ballmer's dream that Bing become a commonly used verb (it's half the syllables of Google!), at least Microsoft is now acknowledging that, just as Gretchen in Mean Girls couldn't force "fetch" on her friends, no amount of marketing is going to get even its own employees, let alone Joe P. Consumer, to un-ironically utter the phrase: "I'll just Bing it real quick."

If you are itching to see how Windows 10 stacks up against your current OS, "Technical Preview" beta testing for Windows 10 is currently open, and you can apply to be one of the first to play with it and document your experience by going to the Windows website and signing up for the Windows Insider Program.

How Segway Succeeded Despite Themselves

By Andrew Hendricks

The story of Segway is an interesting one: a scooter and a company that once sought to revolutionize society and instead settled into being a respectable company with name recognition and overall positive reviews. As someone who is regularly in San Francisco and laughs when the goofy-looking Segway tourists zip past like joyful lemurs, however, I can find it hard to think of Segway as a true success rather than a niche novelty.

Those of us who remember just how big the hype for Segway was may also have our opinions of Segway's success tainted slightly. Before Segway was Segway, the company behind the gyroscopically-stable electric scooter, it was “It.”


The new millennium was just beginning, and wide-eyed inventor Dean Kamen, along with a handful of tech backers (including the late hype-master himself, Steve Jobs, and Amazon's Jeff Bezos), was so confident of the impact the product launch would have that they actually thought this was a good marketing idea.

Codenamed "Project Ginger" throughout its development, the device was preceded by mysterious advertisements popping up across America about a revolutionary new project called "It" that would change the world. At the time, I was an 11-year-old kid who had just seen the late-90s Keanu Reeves and Morgan Freeman action film Chain Reaction, which featured the invention of a way to generate cold fusion. Let's just say that as an 11-year-old, my standards were high. If you say you're going to change the world, you'd better deliver!

A year later, when the Segway was revealed on Good Morning America to a less-than-uproarious response, I remember thinking it would be neat to own one, but I didn't see what the big deal was. The original Segway model did 12 miles an hour, used no brakes, and required the rider to shift their weight and use a manual turning mechanism on the handlebars. Models have improved significantly since then, and are now so reliable that Apple co-founder Steve Wozniak and other proud geeks have popularized the new sport of Segway polo.

Though he may have misjudged the impact Segway would have at launch, Kamen was right about one thing: just how innovative his scooter was, and how much people would enjoy riding it. I personally have not met a single person who has ridden a Segway and wasn't impressed by the experience. Segway was a good product, but it wasn't the "It" of the new millennium. Still, as obvious as it is in hindsight that no product will ever be good enough to merit an ad campaign built on associating it with the most common pronoun in the English language, the notion that Segway really was going to be a game-changer might not have been so ridiculous had it been introduced to the public in a sensible manner or at a lower price. At its consumer launch in 2003, its original price was a whopping $3,000.

Just months after the unveiling of the Segway, a Guardian article spoke about the reasons behind Kamen's confidence in Segway:

“Mr Kamen has predicted that Segway will replace cars for short journeys, particularly in traffic-ridden urban areas, thus changing the urban landscape by introducing a smaller and more environmentally friendly alternative. Segway runs on electricity, with Mr Kamen claiming that six hours of charging time from a wall socket will power the scooter for 15 miles.”

Living in 2014, more than a decade later, when fuel efficiency is only barely becoming a priority and the electric car is only barely becoming more than a novelty... it almost feels like we have let Segway's founder down. Who among us hasn't driven countless walkable errands when a more carbon-friendly scooter ride would have been just as easy?

But Kamen and Segway saw the writing on the wall, realized that their high-minded dream of an all-electric society scootering about as its main mode of transportation was not a realistic business model, and innovated. From 2004 to 2006, Segway partnered with a number of local police departments, security guard companies, and golf courses. The company's withering fortunes began to slowly improve, and in 2009 Segway's then-CEO retired and U.K. businessman James Heselden took over as owner and CEO.

In a tragic irony, less than a year after taking the reins, Heselden died in an accident while riding his scooter. According to a Wall Street Journal article headlined "From Hype to Disaster," "Mr. Heselden's body is found, along with a Segway, after a witness reported seeing a man fall off a 30-foot cliff into a river about 140 miles north of London."

So with a unique history and a brand almost a decade and a half into its lifespan, Segway still has hurdles ahead of it to maintain its market share and not fall into obscurity. What Segway needs in its future is what Dean Kamen, Jeff Bezos, and Steve Jobs all originally saw as the scooter's future: general adoption. What saved the company was that security guards, mail carriers, tourists, and other niche users uniquely benefited from what Segway had to offer; however, until the average Joe considers a Segway a transportation staple like a bike or a car, Segway will continue to fight against novelty and obscurity. And it doesn't bode well for Segway that on its own website, the page that explains in bullet points why you should own a Segway says, quote: "You're a trendsetter. You don't care what people think, you know you're cool."

Thus, Segway is currently, on its very own website, admitting that its ideal customer is someone who doesn't care what people think. Segway, stop shooting yourself in the foot! If you want the everyman to become a Segway user, and if you want us to ditch cars for in-town travel and save the planet, for goodness' sake don't advertise just how dorky you know they make us look!

Yet, as stymied and self-defeating as Segway's marketing has been throughout their history, they are not poised to disappear any time soon, and maybe, just maybe, we could all stand to be a little bit dorkier.

The YouTube of Gaming: Amazon Acquires Twitch for $1 Billion

By: Andrew Hendricks

Move over, Major League Baseball: competitive video gaming is the next big American pastime, and it looks like it's here to stay.

With the recent $1 billion acquisition of Twitch, Amazon has outbid Google at the last hour, in a move that has caused ripples in the gaming community. Known as the "YouTube of live-streaming gaming," Twitch had been in talks to be bought by Google-owned YouTube for the same billion-dollar price tag back in May.

To a non-gamer, the success of Twitch may be difficult to fully understand. For those not interested in video gaming, watching others game competitively may sound like a boring proposition. However, for the millions of Americans watching professionals battle it out and performers play through games they can't afford or can't beat themselves, streaming another person's video game session is no different from watching a television show. In some ways, it's much more like watching a sport.

Highly competitive video game play has been a staple of South Korean culture for nearly two decades. Some of the first worldwide video game celebrities were Korean, with Starcraft, and later Starcraft II, being the first games to really grow the competitive player base and fan base exponentially outside of Asia, in both Europe and America. One person responsible for exposing Americans to Korean-style competitive gaming is famous YouTube caster HuskyStarcraft. He became a mini YouTube celebrity in the gaming community after gaining a reputation for non-stop enthusiasm in his soccer-style sportscasting of 1v1 competitive Starcraft II match-ups. He still churns out videos to this day, casting both professional e-sports matchups and "noob-friendly" videos. All told, HuskyStarcraft's channel has 4,568,397,180 views and nearly 900,000 subscribers!

Unlike first-person shooter games like Call of Duty, or RPGs like Zelda and Final Fantasy, Starcraft II is a game in which two opposing players (or two teams of players) each control a small base where resources are gathered over time to build ever-larger armies. The composition of your army (building aircraft versus tanks, for example) and the choice of when to attack are huge parts of the strategy. With hundreds of mouse clicks and keystrokes needed at woodpecker-like speed, the play of professionals in these games is often discussed in terms of Actions Per Minute, or APM. Simply by watching the dexterity required to play at the upper echelons of these games, one can see how South Koreans view e-sports as legitimate sports, a sentiment that is catching on more and more in the USA.
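APM is just a rate: total inputs divided by elapsed minutes. A quick sketch of the arithmetic, with the game length and input count made up for illustration:

```python
def actions_per_minute(n_actions: int, seconds: float) -> float:
    """Average actions (clicks + keystrokes) per minute over a game."""
    return n_actions * 60.0 / seconds

# A hypothetical 12-minute game with 3,600 recorded inputs:
apm = actions_per_minute(3600, 12 * 60)
print(round(apm))  # 300
```

Three hundred APM works out to five distinct inputs every second, sustained for the entire game, which is why the woodpecker comparison is not much of an exaggeration.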

Overtaking Starcraft II in America over the past couple of years has been the free-to-play League of Legends. While Starcraft II, a Blizzard Entertainment title, costs between $40 and $60 up front for online play, League of Legends, a Riot Games production, makes its money through in-game purchases. Unlike popular Facebook games like CastleVille and FarmVille, which cashed in on users shelling out money to progress, League of Legends' ingenious difference was that, although you can unlock extra playable characters with money, you can also unlock everything needed to be equally competitive through gameplay alone. The only content you cannot "earn" without paying in League of Legends are cosmetic "skins," which change the armor or outfits of different champions.

In 2012, the League of Legends Season 2 Championship drew 8.2 million viewers. At the time, it was the most-watched e-sports event of all time.

Just last year, however, this record was more than tripled as the Season 3 championships drew 32 million viewers. To put this number into perspective, Game 1 of the World Series made headlines with its dominant TV ratings of 14.4 million viewers.
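Checking the arithmetic, with the figures taken directly from the paragraphs above:

```python
season2_viewers = 8_200_000    # 2012 Season 2 Championship
season3_viewers = 32_000_000   # the following year's championship
world_series_g1 = 14_400_000   # World Series Game 1 TV ratings

# "More than tripled": roughly 3.9x the previous record
print(season3_viewers / season2_viewers)

# And over twice the World Series opener's audience
print(season3_viewers / world_series_g1)
```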

With new video games gaining niche communities every day, and popular games like League of Legends rising to the viewership levels of golf, soccer, and even baseball, it is easy to see why big names in tech like Microsoft, Google, and Amazon would battle it out to become the YouTube of gaming.

Flappy Bird Flies Out of Control for Creator


by Andrew Hendricks

You’ve probably heard about the crazy rise and dive of Flappy Bird, a mega-successful iOS and Android game lauded (and loathed) for its simplicity, difficulty, and addictiveness. In an interview with The Verge that went viral, Flappy Bird creator Dong Nguyen revealed that his game was, at the time, averaging $50,000 a day in ad revenue.

The interview brought Nguyen a flood of messages: many criticized his game’s concept art for designs very similar to the Mario franchise (the green pipes bear a striking resemblance), while others called on him to make the extremely difficult game easier.

In an interview with Forbes, Nguyen tried to set his side of the story straight by explaining that the sole impetus for removing the game was his distaste for its addictive quality. “Flappy Bird was designed to play in a few minutes when you are relaxed,” Nguyen said. “But it happened to become an addictive product. I think it has become a problem. To solve that problem, it’s best to take down Flappy Bird. It’s gone forever.”

Yet if this were entirely the case, one wonders why, even after Flappy Bird was pulled from Google Play and the Apple App Store, the ads in previously downloaded copies were still being updated. Remember that removing an app from the store does not mean no one can play the game anymore; it simply means no one can download a new official copy of it. Nguyen is still drawing in ad revenue from the ads that continue to play on those devices. And now that Flappy Bird is a manufactured rarity, the game has never been more popular, at least for now.

Popular video game commentator, podcaster, and content producer Burnie Burns of Rooster Teeth echoed this sentiment on a recent podcast: “Not only is Flappy Bird still running ads, I have a new ad, today. So while he is saying ‘I wanna go back to my simple life, this game has ruined me, and they’re overestimating my success [...],’ he is still running ads on the installed base of 16 million copies. That he hasn’t turned off.”

Though Nguyen has stated explicitly there was no threat of litigation from Nintendo and that it did not factor into his decision to take down the app, some have speculated Nintendo might still have proved litigious down the road, and would only be more motivated to do so if the game were sold to an entity with deeper pockets.

Taking a look at Nguyen’s infamous Twitter feed does lend an air of credibility to his side of the story. Between death threats, requests to add an easier mode, and people begging him for money, Nguyen, who had no media team to help him, realized that revealing the game’s monetary success had been a big mistake. $50,000 a day (even if that is an inflated estimate) is definitely a windfall for a developer in Southeast Asia, and one can see how the sudden spotlight and the overwhelming volume of emails, messages, interview requests, and phone calls would be a little too much to handle. However, regardless of Nguyen’s real motivation, he has managed to separate himself at least somewhat from his brand (a business move totally opposite that of Angry Birds and Clash of Clans) and made himself the story, even if that was the exact opposite of his intention.

Very few mobile app developers are recognizable figures in the video game world. Tech journalists have speculated that whatever Nguyen puts out next will be the hot new download, and he hasn’t even hinted at designing anything yet! For someone who will rely on downloads and ad revenue for whatever new game he releases, Nguyen may have stumbled into the spotlight unwillingly, but by removing a product he thought was sub-par and/or harmful, he has done what countless publicists could only dream of.


'Pay-Per-Gaze' Future of Google Glass Ads?

By Andrew Hendricks

We see more of them now—walking among us. Their gaze far-off. Their skin, pale and clammy. We can only remember seeing them once, in the distant past. But now, closer to urban areas, we notice one almost every other day. At least every week. We know there will be more of them. We know soon we will be one of them.

No, I'm not talking about zombies or vampires, but those similar oddities, early adopters of technology who have taken up wearing Google Glass in public.

We all know that advertising is what pays the bills behind the scenes at many major corporations. Yet we are all terrified of what the future of unrestricted advertising could look like.

The sentiment brings to mind a popular Futurama episode in which the main character, Fry (a pizza delivery boy, cryogenically frozen and defrosted in the year 3000), has a strange dream that ends up extolling the virtues of a pair of sexy red space briefs. He tells his coworkers about this with horror and is met with blase shoulder-shrugging. In the year 3000, companies beam ads straight into your dreams.

"Big deal. It's old news," they tell him. We later see all of them at the mall, and Fry has tried on the underwear.

So with Google Glass the newest big product on the horizon and the biggest question mark since the iPhone, it is understandable that the potential of such technology raises both advertiser eyebrows and end-user concerns.

Google has already set privacy advocates all a-twitter over a recent story published by Business Insider about a “pay-per-gaze” patent Google has filed. Essentially, the patent lays out a potential use of the technology in which advertisers could be charged every time an advertisement caught a user's eye, determined by tracking where the user is looking and how their pupils dilate. As if pop-up ads weren't bad enough.
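To make the billing concept concrete, here is a minimal sketch of how a pay-per-gaze model could work: advertisers, not users, accrue a charge per recorded gaze, with longer gazes (a rough proxy for engagement) billed at a higher rate. Every name, rate, and threshold here is invented for illustration; the patent itself describes no such API.

```python
def bill_advertiser_cents(gaze_events, base_cents=1, engaged_cents=5,
                          engagement_threshold_s=2.0):
    """Total charge in cents for a list of (ad_id, gaze_duration_seconds).

    A gaze at or beyond the engagement threshold bills at the higher rate.
    """
    total = 0
    for _ad_id, duration in gaze_events:
        total += engaged_cents if duration >= engagement_threshold_s else base_cents
    return total

# Hypothetical gaze log from a day of wearing the headset:
events = [("billboard-42", 0.5), ("billboard-42", 3.1), ("bus-stop-7", 1.0)]
print(bill_advertiser_cents(events))  # 1 + 5 + 1 = 7 cents
```

Integer cents are used deliberately so the totals stay exact; a real billing system would do the same rather than accumulate floating-point dollars.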

Of course, it is important to remember that a patent and a company's intentions are two very different things. Even if Google's motto weren't the refreshing “Don't be evil,” a company like Google is not ignorant enough to forget that in the internet age, pitchforks are never in short supply, and such a move would be very bad for business.

However, the fact that such an idea was discussed at all raises concerns even for privacy advocates who couldn't be greater fans of Google. No Apple or Microsoft product currently competes with Google Glass; with such a revolutionary and potentially useful device, however, it is impossible that other companies would not attempt to replicate Google's success, if it is in fact successful. One should certainly fear less reputable vendors and manufacturers acting on ideas such as Google's “purely hypothetical” patent.

And yet, there is reason to hope for the future of marketing through computational eye-wear when one realizes that the successful advertisers of the last decade have been those that make innovative use of new technology, rather than those who try to pull one over on their users. Think QR codes for smartphones, or social media presences that add value, interact with customers, and give out prizes. These have been the success stories, and there will be similar stories for companies that can get in on the ground floor with Google Glass, or with whatever Glass's first competitor turns out to be.

Perhaps such successes will come from companies that add a fun element to the real-life environment integration Google Glass offers. Imagine billboards that, merely by being looked at, send the Google Glass user an email with a buy-one-get-one-free offer. Imagine being able to load ads ahead of time and have your bill reduced by a certain amount for however many ads you choose to watch, when it's convenient for you on a train or bus.

The possibilities are endless. As of now, the few apps currently available have not strayed much from the current paradigm of web and mobile web advertising. Google is well-aware of the concerns users have about etiquette and privacy issues that come with eye-glass computers. We can only wait and see what sort of advertisement users will be comfortable with, and what big ideas will completely change the landscape for advertising in this new arena.




Windows H8

By Andrew Hendricks

Microsoft’s new operating system, Windows 8, has been met with a flurry of confusion and condemnation. Between the “lack of a start button” and the unusual, touch-oriented Metro start display, critics from computer novices to tech junkies have lined up to trash Windows 8 as inferior to its predecessor, Windows 7.

Windows 7 is a good OS, don’t get me wrong. It’s easy to see why people like it. It’s Windows XP (everyone’s favorite OS) updated, without a lot of the crap that came with using the notoriously buggy Windows Vista. I’m not criticizing Windows 7. It was the last, best OS in the style Microsoft has been known for ever since it forsook DOS for desktop computing. Even then, many die-hard programmers hated the watered-down graphical interface that replaced the raw power of command-line input, for anything other than games of course. But we know who won that battle, and now every OS must have a desktop.

“Yeah, and where’s my desktop!” you say. “Where’s my friggin’ start button?” Apparently Microsoft had no idea what a horrible sin they were committing by messing with the almighty start button. The current version of Windows 8 uses the “Metro” style, with a “start bar” of icons rather than immediately loading a desktop. You can, however, toggle between it and your regular desktop view with the Windows key.

The start screen may seem a bit useless to PC users at first, and may remind some gamers of the Xbox’s load screen. Windows Phone users will be even more familiar with the style. While I admit I didn’t use it much at first, I have learned you can save a lot of time by deleting the apps you don’t want from the start bar and pinning your favorites. Customization is a key feature in Windows 8, and one I found fun to play around with. It is understandable why less computer-savvy users would freak out at the loss of their go-to start button; however, it is not even gone. It is in the exact same place, just hidden until you mouse over the bottom-left corner.

Some more tech-oriented friends of mine have pointed out that their hatred for it comes down to simple tasks taking more clicks or more time than in Windows 7. In some cases, similar operations might take one or two extra clicks; however, I’ve found that most of the complaints come from people trying to figure out how to do a task, rather than from how difficult the task actually is on Windows 8. By taking ten seconds to Google any question I had about the OS, I found myself not once frustrated or unable to do what I wanted. I prefer an OS that has the most streamlined functionality and performance; I don’t require it to read my mind. My main complaint is the default full-screen nature of apps like photo viewing and PDFs, which cannot be changed out of the box.

What impressed me most was that Windows 8 doesn’t treat me like an idiot. My most hated “feature” of Windows 7 was that administrator rights were hidden from you. I don’t like the assumption that my system needs to be protected from me, especially when that protection prevents me from doing maintenance. Some viruses can’t be removed, even with modern anti-spyware tools, without administrator rights. To activate the administrator account in Windows 7, you actually had to go into the command prompt and type in a string that unlocked the account upon login. While it’s cool to occasionally get to feel like a programmer, a good OS should never force the user into the command prompt to do something as absolutely reasonable as being the admin on their own system. Windows 8 tries to be noob-friendly, displaying frequent-use apps boldly on the start bar, but it doesn’t hide utility from users who have some idea of what they are doing. When trying more advanced tasks, a good rule of thumb with Windows 8 is: when in doubt, right-click. You’ll find the options you are looking for.
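For reference, the “string” in question is the standard `net user` command, run from a Command Prompt opened with “Run as administrator” on Windows 7:

```bat
rem Unhide the built-in Administrator account at the login screen:
net user administrator /active:yes

rem And to hide it again afterwards:
net user administrator /active:no
```

Reasonable for a power user, maybe, but exactly the kind of command-line detour the paragraph above is complaining about.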

Windows Defender is also built in (it is 8’s version of the popular, free Microsoft Security Essentials), and under-the-hood utilities are easily reached by right-clicking the bottom-left corner of the screen (the invisible start button) while on the desktop. Right-clicking any blank space in the start menu gives you an option to view all your apps. You can still access your files, documents, computer, and Control Panel as normal. Windows 8 doesn’t get rid of the good stuff that works; it merely streamlines its style and utility in a way that, quite frankly, we’ve all been criticizing Microsoft for not doing sooner.

We all love Mac OS for the simplicity of its filesystem and for being easy on the eyes. With its new OS, Microsoft is stepping into a world where potentially half of its users are going to be on touchscreens. The clunky, start-button-oriented OS of Windows past is simply not feasible on phones and tablets, and I frankly wouldn’t enjoy trying to use it as one. Yet, still using Windows 8 on a PC with a mouse, I am blown away by how useful it is. There will always be those who criticize anything new that changes what they are used to, and this too I understand. Windows 7 is still a great system, and if it works for you, you probably won’t need to change for a while (hell, people are still using XP). But from a technical and utilitarian standpoint, Windows 8 is simply much better than the press it has gotten. This OS will be the model for Windows operating systems for years to come. And we should be happy that’s the case.