Machine learning scientists don’t use time series?
After a deep discussion with GraphLab's architects, I was gobsmacked to realize that ML scientists don't use time. They hold the notion that introducing real-time feedback will destabilize the system. I call utter bullshit: of course the system will destabilize if you don't apply control theory to it.
ML = f(x), where x is "big data" processed as a batch (not a time series)
RT = f(x) + f(t), where x is small data and t is a time series (so the model adapts over time)
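A minimal sketch of what "f(x) + f(t) with control theory" could look like in practice: an online learner that adapts to each new observation in time order, with a control-style limit on every update so real-time feedback can't blow the weights up. All names and constants here are illustrative assumptions, not anyone's production system.

```python
import numpy as np

class DampedOnlineModel:
    """Online learner whose per-step update is bounded, control-theory style."""

    def __init__(self, n_features, lr=0.05, max_step=0.1):
        self.w = np.zeros(n_features)   # model weights, the f(x) part
        self.lr = lr                    # base learning rate
        self.max_step = max_step        # control limit: clamp each update's norm

    def predict(self, x):
        return self.w @ x

    def update(self, x, y):
        """One real-time step: adapt as each (x, y) arrives in time order."""
        error = y - self.predict(x)     # the feedback signal
        step = self.lr * error * x      # raw gradient step
        # Safeguard (actuator saturation): bound the step so a burst of
        # noisy feedback cannot destabilize the weights.
        norm = np.linalg.norm(step)
        if norm > self.max_step:
            step *= self.max_step / norm
        self.w += step
        return error

# Usage: stream observations in time order instead of one giant batch.
model = DampedOnlineModel(n_features=3)
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])
for t in range(1000):
    x = rng.normal(size=3)
    y = true_w @ x + rng.normal(scale=0.1)   # noisy real-time observation
    model.update(x, y)
print(model.w)  # converges toward true_w without oscillating
```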
Another cop-out is that we can't scale ML on a cluster because we'll end up with multiple models (one per shard). Again, hogwash: sharded models can be stability-tested in real time and pushed out in near real time to the shards (GitHub meets DevOps), as sketched below. How do ML data scientists think the Israelis built the Iron Dome?
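A sketch of that push path, under assumed names and thresholds (nothing here is a real system's API): a coordinator stability-tests a candidate model against a baseline on held-out data, and only a passing candidate gets fanned out to every shard.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class Model:
    weights: list = field(default_factory=lambda: [0.0, 0.0])

    def predict(self, x):
        return sum(w * xi for w, xi in zip(self.weights, x))

@dataclass
class Shard:
    model: Model

def holdout_error(model, holdout):
    return sum((model.predict(x) - y) ** 2 for x, y in holdout)

def is_stable(candidate, baseline, holdout, tolerance=0.02):
    """Stability gate: reject candidates that regress past the baseline."""
    return holdout_error(candidate, holdout) <= (
        holdout_error(baseline, holdout) * (1 + tolerance)
    )

def push_if_stable(candidate, baseline, shards, holdout):
    """Test in (near) real time, then replicate to every shard if it passes."""
    if is_stable(candidate, baseline, holdout):
        for shard in shards:
            shard.model = copy.deepcopy(candidate)
        return True
    return False

# Usage: a candidate trained on one shard is vetted, then replicated.
holdout = [([1.0, 2.0], 5.0), ([0.0, 1.0], 2.0)]
baseline = Model([0.5, 1.5])
candidate = Model([1.0, 2.0])
shards = [Shard(copy.deepcopy(baseline)) for _ in range(4)]
if push_if_stable(candidate, baseline, shards, holdout):
    print("candidate pushed to", len(shards), "shards")
```

The design choice is the same as a gated deploy: the stability test is the CI check, and the fan-out is the release, which is why "GitHub meets DevOps" is the right mental model.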
The RAF sent the Dam Busters crews to aim and release Barnes Wallis's bouncing bomb themselves, because no amount of "big data" can correct an algorithm without real-time feedback.
Net-net: Ever wonder why ad-tech blows so badly, why stupid ads you're not interested in follow you around, even when you give feedback to stop them?
Apple Pay could help usher in a new era in mobile payments, and not just for its own wallet.
"The fact that it is still possible to use customer service or an automated system to change someone else’s PIN with just the cardholder’s Social Security number, birthday and the expiration date of their stolen card is remarkable, and suggests that most banks remain clueless or willfully blind to the sophistication of identity theft services offered the cybercrime underground, cybersecurity journalist Brian Krebs wrote in his analysis of the recent Home Depot theft.”