29 9 / 2014
The Big List Of Stuff
Earlier today I gave a ridiculously fast-paced (and in hindsight way too compressed and intense) talk at the /dev/world conference in Melbourne, Australia - outlining some of the tools and libraries I use to get stuff done. I told the audience I’d publish a big list of everything, so here goes my first attempt. I’ll add to it over the next few days.
The First Set (Terminal and environment)
The first things I touched on were package management, bash, environment maintenance, exports and a few other things that make using the terminal, navigating around it, and installing software a lot easier. Here… we… go…
Package management for OS X. Incredibly easy to use, community managed and maintained. Worth its weight in gold. Homebrew
Heralded as ‘bootstrap for your terminal’, and it truly is. A great foundation that provides heaps of customisations and a lot of things that make using the terminal a joy. Source is available. Bashstrap
Z.sh (jump around)
Keeps track of directories you jump to in the terminal and lets you jump around easily with fuzzy matching and guessing. Worth its weight in gold (well, given that it’s weightless code, worth far more than that). z.sh
Keyboard shortcuts to trigger a terminal anywhere and everywhere. In a Finder window and need a terminal? Bam, hit the DTerm keyboard shortcut and you’re away. Absolutely fantastic: DTerm
Dotfiles and exports
This was more a basic overview of some great aliases and things to bang into your dotfiles and exports. I’ll flesh this out as necessary, but will recommend that people read Craig Hockenberry’s intro to the terminal: The Terminal
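To make that concrete, here are a few illustrative lines of the sort you might drop into a .bash_profile (these particular aliases and exports are my own examples, not a prescription):

```bash
# Illustrative dotfile fragment -- adapt to taste.
export EDITOR="vim"                   # sensible default editor
export HISTSIZE=10000                 # keep plenty of shell history
alias ll="ls -lah"                    # readable directory listings
alias ..="cd .."                      # climb a directory quickly
mkcd() { mkdir -p "$1" && cd "$1"; }  # make a directory and jump into it
```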
OS X Apps
I also overviewed a few great OS X apps that I’d heartily recommend if you’re developing. I’ll preview a few here, and if you have recommendations, send me a Tumblr message and I’ll throw more info out as necessary.
Patterns: Regex done right
Regex validation with a live REPL and feedback. If you’re trying to validate or build complex regexes, this app is absolutely the go-to on the Mac. Worth every cent. Patterns: Mac App Store
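If you want the same sanity check outside the app, the core of what a regex tester does can be sketched in a few lines of Python (the pattern here is an invented example, not anything from Patterns itself):

```python
import re

# A hypothetical pattern: Australian-style mobile numbers like "0412 345 678".
pattern = re.compile(r"^04\d{2}[ ]?\d{3}[ ]?\d{3}$")

def matches(candidate: str) -> bool:
    """Return True if the candidate string matches the pattern."""
    return pattern.fullmatch(candidate) is not None

print(matches("0412 345 678"))  # True
print(matches("1234"))          # False
```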
Rested: REPL for REST
Absolutely dead simple REST client with heaps of fantastic customisations that lets you test APIs, be they your own or third-party. Helps you validate your methods or ideas before you start codifying them in Objective-C. I use this almost every day when I’m dealing with services, and let’s be frank, we’re all going to be dealing with services in one way or another. Rested: Mac App Store
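Rested itself is point-and-click, but the kind of request you’d poke at with it looks roughly like this in Python’s standard library. The URL and payload are placeholders, and the request is deliberately built without being sent:

```python
import json
import urllib.request

# Placeholder endpoint -- swap in the API you're actually validating.
url = "https://api.example.com/v1/widgets"
payload = json.dumps({"name": "test-widget"}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Inspect the request before committing anything to Objective-C.
print(req.method)                      # POST
print(req.get_header("Content-type"))  # application/json
```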
CodeRunner: Run and Validate
Fantastic way to test out Obj-C code (Foundation only, though), Ruby, Bash, Python, Perl, etc. A great thing to have in the repertoire to be able to quickly validate and test snippets or logic ideas. I use it very, very frequently to validate NSDateFormatter parsing and functionality when I’m trying to do something a bit more complicated. Worth every cent. CodeRunner: Mac App Store
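CodeRunner runs Python too, so here’s the flavour of date-parsing sanity check I mean, using Python’s strptime rather than NSDateFormatter (the format string and sample are just examples):

```python
from datetime import datetime

# Does this format string actually parse the timestamps my API returns?
fmt = "%Y-%m-%dT%H:%M:%S"
sample = "2014-09-29T18:30:00"

parsed = datetime.strptime(sample, fmt)
print(parsed.year, parsed.month, parsed.day)  # 2014 9 29
print(parsed.strftime(fmt) == sample)         # True: round-trips cleanly
```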
Base: SQLite Editor
Yeah, sure, you could jump into the SQLite console if you wanted to mess around with SQLite DBs, but if you’re a developer you’re going to end up having to deal with or open a SQLite database at some point, and Base makes that pleasant. I use it very, very frequently to test SQL queries, or to pull apart Core Data databases when I’m having issues. A fantastic piece of work, and this post doesn’t even scratch the surface. Base: Mac App Store
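Base is the comfortable way in, but for a sense of the workflow, prototyping a query against a throwaway SQLite database takes only a few lines of Python (the table and data are invented for the example):

```python
import sqlite3

# In-memory database -- nothing touches disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY, title TEXT, plays INTEGER)")
conn.executemany(
    "INSERT INTO tracks (title, plays) VALUES (?, ?)",
    [("Alpha", 10), ("Beta", 3), ("Gamma", 42)],
)

# The sort of query you'd otherwise prototype in Base.
rows = conn.execute(
    "SELECT title FROM tracks WHERE plays > ? ORDER BY plays DESC", (5,)
).fetchall()
print(rows)  # [('Gamma',), ('Alpha',)]
```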
Soulver: The calculator Apple forgot
I can’t give Soulver enough praise. I use it, again, almost every day, particularly when doing rect or frame calculations or any other kind of mathematical garbage I have to do on a day-to-day basis. It doesn’t just do basic maths; it’s incredibly intelligent and does a TONNE of excellent, excellent things. Get it. Soulver: Mac App Store
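The rect arithmetic I keep reaching for Soulver for is often the humble ‘centre this frame in that one’ sum, which in plain Python is just:

```python
# Centre a child rect of size (cw, ch) inside a parent of size (pw, ph).
def centred_origin(pw, ph, cw, ch):
    """Return the (x, y) origin that centres the child in the parent."""
    return ((pw - cw) / 2, (ph - ch) / 2)

# e.g. a 100x44 button centred on a 320x568 screen
print(centred_origin(320, 568, 100, 44))  # (110.0, 262.0)
```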
Dash: Docs and Snippets
Dash makes it incredibly easy to store and browse documentation from almost every documentation source on the net. It’s particularly good at parsing iOS documentation, and is a heck of a lot easier to reference and refer to than the built-in documentation viewer. I thoroughly recommend spending the money. The other feature most people don’t mention is the excellent snippet library support. Absolutely indispensable. Dash: Mac App Store
One of the other things mentioned in the talk (and I will make the slides available at some point, though they don’t make much sense out of context) was a few must-use CocoaPods that, again, I use on the regular. If you’ve got any other suggestions, let me know!
Lets you turn a hex string into a UIColor. No muss, no fuss, no messing around. Cannot stress how useful that is. AVHexColor GitHub
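AVHexColor’s API is Objective-C, but the conversion underneath is easy to see in Python (this helper is my own sketch, not the library’s interface):

```python
def hex_to_rgb(hex_string: str) -> tuple:
    """Turn a hex colour string like '#FF8800' into an (r, g, b) tuple of 0-255 ints."""
    s = hex_string.lstrip("#")
    return tuple(int(s[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb("#FF8800"))  # (255, 136, 0)
```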
Fantastic multi-faceted logging system with multiple output support and built-in log rotation. You can even do excellent things like set up remote logging to things like Logstash and Kibana. I have it in my default Podfile on every app I build. CocoaLumberjack GitHub
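For a feel of what ‘multiple outputs plus rotation’ means, here’s the same shape expressed with Python’s standard logging module; this is an analogy, not CocoaLumberjack’s API:

```python
import logging
from logging.handlers import RotatingFileHandler

log = logging.getLogger("app")
log.setLevel(logging.DEBUG)

# Output one: the console.
log.addHandler(logging.StreamHandler())
# Output two: a file that rotates once it hits ~1 MB, keeping 3 old copies.
log.addHandler(RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3))

log.info("both handlers receive this line")
```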
I spent a lot of time in my talk giving Mantle a lot of love, and I’ll put together a broader blog post at some point about how I use Mantle to rapidly prototype and validate ideas, as well as consume weird and wonderful APIs for a chunk of the enterprise work we have thrown our way. Mantle has saved my life, and on several occasions helped me bring in projects that would otherwise have blown out. It’s another library that’s always in my Podfile, and it’s been battle-tested by the folks at GitHub, so you know it’s the goodness. Mantle GitHub
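The heart of what Mantle buys you, declarative JSON-to-model mapping with value transforms, looks conceptually like this toy Python sketch (not Mantle’s actual API):

```python
from datetime import datetime

# A declarative mapping: model attribute -> (JSON key, transform).
FIELD_MAP = {
    "name":   ("user_name", str),
    "joined": ("created_at", lambda s: datetime.strptime(s, "%Y-%m-%d")),
}

def parse(json_dict: dict) -> dict:
    """Build a model dict from raw JSON using the declarative mapping."""
    return {attr: fn(json_dict[key]) for attr, (key, fn) in FIELD_MAP.items()}

user = parse({"user_name": "mel", "created_at": "2014-09-29"})
print(user["name"], user["joined"].year)  # mel 2014
```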
AFNetworking : Seriously
This was more included as a bit of a joke because everyone and their mother uses AFNetworking (made by the incredibly prolific mattttttttttt). If you don’t want to just go NSURLSession/Connection, give AFNetworking a go. AFNetworking Github
Alcatraz and Xcode
Alcatraz is a package manager for Xcode. It makes discovering, installing and managing Xcode plugins an absolute breeze. (I gave a shoutout to the excellent Tony Arnold in my talk when I mentioned Alcatraz.) Install it, and then install the plugins I’m about to list. Alcatraz Homepage
Allows you to autoformat your code using any number of code style plugins. Uncruftify your code automatically. I’m personally a big fan of the LLVM style guide and the WebKit style guide. Nobody needs to see poorly formatted code. ClangFormat GitHub
Sublime-style minimap in Xcode. As useful as you could imagine; makes navigating particularly long source files very, very easy. SCXcodeMinimap GitHub
Gives you an autofilled statement when you load an image from your app’s bundle with imageNamed:, including a preview of the image. Incredibly helpful. KSImageNamed GitHub
ColorSense for Xcode
Gives you inline previews of UIColor and NSColor. When paired with AVHexColor, it makes figuring out what your app is going to look like in production a breeze (no more messing around with the compile, run-in-simulator loop). ColorSense GitHub
I also mentioned a few other things in my talk, but I’ll focus on one in particular, and that’s the Chisel library from Facebook.
Chisel is Facebook’s customised set of LLDB commands, and my god, they are useful. bmessage alone has saved me hours upon hours in the debugger. Install it and watch the video about using it. Chisel GitHub
I’ve also included a gist of a modified form of my TableView generic datasource/delegate that I wrote in Swift on my way to the airport. Use it as you see fit; it’s pretty dumb code, but I use it semi-regularly for prototyping. Swift TVDS Gist
I am hoping to have the slides up in a few days, but I’ve taken the time to reflect on the talk and honestly think it needed a substantial amount of editing and paring down. I know I covered a lot of stuff and a lot of philosophy in the process, but I feel that the huge amount of material I wanted to cover, to explain how vertically integrated and interdependent a lot of the tools we use are, fell a bit flat. Were I to do it again, I’d break the talk up into about five separate five-hour talks and give each section the coverage and love it deserved.

In keeping with this, I made a few throwaway mentions of generic code I’ve used, and a few patterns / tips / techniques that make building stuff a bit faster and easier. Depending on how my workload looks over the next couple of months, I’ll put together a few blog posts about the things I touched on briefly, and maybe next year collate that into something a bit better resourced, a bit less intense, and way more friendly to people who aren’t completely mad software engineers.

Still, it was heartening to see so many people tolerate the ridiculous storm of information without giving up and walking out. I do hope that some of this information can be of use to you.
17 8 / 2014
"Waking up in the morning is easier if you have someone to wake up for."
06 8 / 2014
With all of the talk about ‘metadata’ data retention schemes for internet history, we need a straightforward and easy way to understand what metadata is.
Data’s pretty straightforward; let’s look at it in the scope of a web request. Say you’re logging into Facebook and you look at a picture that a friend has uploaded. The content of that picture is the data: the digital contents of the file, which let it be displayed on any computer that supports decoding that format, are the data itself.
Metadata is the data that’s generated when you view data, or when you access data.
So when you visited that Facebook page, you probably hit a URL (the addressing system for systems on the internet, for the most part). The metadata created was that you hit that URL, at that time, who the person was that queried that URL (your IP address, which your ISP can easily correlate with who you physically are), and any other information, such as the size of the request (which could be used to intuit the kind of information transferred). So in terms of a basic Facebook request we’ve now got:
- The time of day the request was made
- Basically who made the request
- How big the request was
- Any other resources that were related to that request (such as files that tell your browser [the thing you’re using right now] how to display the page you’re looking at).
This is a basic example of the kinds of metadata that can be generated.
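Put together, one such hypothetical metadata record might look something like this (every value invented for illustration):

```python
# A hypothetical metadata record for a single web request.
record = {
    "timestamp": "2014-08-06T21:14:03+10:00",  # when the request was made
    "source_ip": "203.0.113.42",               # effectively who made it
    "url":       "https://www.facebook.com/photo.php",
    "bytes":     48213,                        # how big the request/response was
    "related":   ["style.css", "app.js"],      # resources fetched alongside it
}
print(record["source_ip"], record["bytes"])
```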
What can just the collected metadata let us discover?
Let’s jump straight to a more ribald example. Let’s say you’re unwinding and relaxing and want to watch some pornography. You browse to your favourite website, do your thing, and then afterwards decide to watch some funny cat videos. With the metadata, any individual with access to that info would basically be able to figure out how long you choked the chicken or flicked the bean, and what to. So if you’re into stuff that is your own kind of kink, just the data about the data gives whoever is reading that metadata log (be they an ASIO officer, an RSPCA officer, someone from the local council, or a Russian or Chinese hacker) information about your go-to-whoa time and the kind of shit you’re into.
Okay, so how can the fact that I look at porn be used to fight terrorism?
First off, it’s a furphy to say that data retention can fight terrorism. Data retention would provide a mechanism for security services or police to access your browsing history and use it to prove a case against you for a particular crime you are accused of. Without data retention there are still, surprisingly, people who are accused of and convicted of crimes, and who are dealt with accordingly by the justice system. People who have been planning (or attempting to plan) terrorist attacks on Australian soil have been detected and caught through old-fashioned police work. Retained data provides another source of evidence for police to build a tighter case; data retention in and of itself does nothing to ‘fight’ terrorism.
But wait a minute, if they have all that data, won’t they be able to see what terrorists do and get them before they commit a crime?
That’s the rolled-gold claim of the software vendors and the security hawks who peddle these pieces of software: through a few complex algorithms and some number crunching, we can detect criminal intent before it actually happens. The thing is, the technology to correlate actions with intent has been around for a while, and if you’ve been weirded out when Google ads track you around the internet and show you ads for things you’ve searched for, you’ve seen this technology in action. When you’ve then been doubly confused that Google is showing you an ad for something completely irrelevant to you, or something you don’t need, you’ve found something even more interesting: a misfiring of the algorithm, or an ‘overfit’. This is where stuff gets very interesting and also very, very scary. The government (or, let’s be real, Palantir or whatever other contractor manages and performs dredging on this vast dataset) could set certain red flags, or websites that, when visited, trigger something to happen. That trigger could be to log more information about the request, it could be to look for more requests like it, it could be to add flags to a person’s file.
So what, shouldn’t we be keeping an eye on terrorist websites?
It depends on what you’re trying to do. If you’re trying to catch people that are either researching terrorism or documenting terrorist websites, then sure. If you’re trying to combat terror, this mechanism is only going to work once. Terrorists are engaged in what is known as ‘asymmetric warfare’: they don’t play by the existing rules, which is why it’s particularly difficult to combat them and sniff them out. The war on terror has also shown they’re incredibly capable of adapting to whatever we throw at them. If data retention were purely about combating terror, then we wouldn’t have had the broader slip by Tony Abbott today (6/8/14) saying that retained data would also be used for other ‘law enforcement purposes’. These ‘law enforcement purposes’ are already incredibly broad, and not constrained. As this article in the Telegraph shows, more than half of UK councils are using extraordinary powers under anti-terror laws to spy on people who… don’t use their bins correctly.
Okay but that’s an extreme and ridiculous example, with oversight this stuff can be used correctly
Systems aren’t perfect, and people certainly aren’t perfect; as someone who has worked with software systems in secure settings for years, I can tell you stuff slips through the gaps. The good thing is we don’t even have to entertain the claim that this would be an impenetrable system with access limited to a few people: Abbott himself said it would be used for other forms of criminal investigation, so there are going to be multiple points of access (or multiple tiers of access) to the system. With those multiple tiers come multiple points of failure that could be abused, leading to massive privacy breaches.
But what about measures needed to keep our country safe?
As of the 6th of August, 2014, there’s only ever been one fatal attack classified as a ‘terrorist attack’ on Australian soil: the Hilton Hotel bombing, and that happened in 1978. As addressed earlier, data retention will actually do nothing to keep Australians safe, and in many instances the false positives created by systems scanning for ‘behaviour patterns’ will waste the time and energy of the security services and potentially allow people to slip through the cracks.

There are two broader concerns here. The first: do we want to live in a society where mass warrantless surveillance of our citizenry is a mundane fact of everyday life? Do we trust the government (or really, whomever they contract to handle this) to securely manage our browsing histories [keep in mind it’s not just browsing, it’s everything that uses the internet, but that’s a point for another day] and make sure they’re only ever used ethically and when absolutely justified? The thing is, we already have a system with stringent checks and balances that works to protect privacy and make sure data is being used legitimately: they’re called warrants, and the police, ASIO, ASIS and others use them every single day.

Again, data retention is not about fighting terror; data retention is about something much bigger. Data retention is step one towards substantial internet control by the government. With legislation mandating that ISPs put in place infrastructure to snoop on and retain internet traffic, it’s trivial to, say, hand over the data of those engaging in alleged piracy to copyright agencies to institute a three-strikes system. With a system in place to monitor and record internet traffic, it’s trivial to institute a blacklist or whitelist carte-blanche internet filter.
The data retention play isn’t about data retention; it’s about the government wresting a greater degree of control over how we use the internet and putting into place a system of mass surveillance that will almost certainly be misused to the detriment of many Australian citizens. But there’s one final thing that makes the entire data retention play a complete furphy.
How easy is it to bypass metadata collection under a data retention regime?
It’ll take you less than five minutes. Go to a website like easyvpn or strongvpn, sign up for a VPN service, and follow their super simple instructions to route all of your traffic across an encrypted channel that can’t be snooped on. The metadata the government will see if you put all of your traffic down an encrypted pipe? The size of what you’re transferring (maybe, depending on what kind of inspection they do) and the fact that you’re connecting to a VPN. So unless they want to make using a VPN service a crime (which, again, is probably feasible), the entire data retention regime is easily defeated by Johnny Jihad, who can get back to plotting his war against the Great Satan. The best thing about those VPN services? They don’t keep logs, so when the cops come calling after a month-long process to compel access to the information, there’s nothing to hand over. This point in and of itself completely explains why data retention is an absolute farce and in no way a deterrent to terrorism.