Mobile vs. PC, Where We Are and Where We’re Going

Let’s get real for a few minutes.

Believe it or not, PCs serve an important purpose: to get important stuff done. Think of it like semis vs. cars. On one end, you have something that gets you from A to B. On the other end, you have a hulking machine built to haul 10 tons or more. They’re huge, ugly, and burn energy, but they do what a car simply won’t do in its form factor. As much as I would love to have a PC that fit in my pocket, the reality is this won’t be happening for a very long time.

I built my desktop to be a stuff-doing station. It has 16GB of RAM, an 8-core 64-bit processor, a video card that cost an embarrassing amount of money, and banks upon banks of peripherals. Shrinking down this computer anytime soon would be impossible. I mean, the video card alone is probably the size of around 15 iPhones. Multi-terabyte drives are the size of 3 iPhones. Running on ARM could probably shrink down the other internals, but we’ve already got a huge problem for real-world work if we can’t manage to shrink down storage and video cards at a decent price.

What do we need that stuff for? Well, let’s look at the PC audience. Gamers need those video cards, maybe even 2 or 3 of them. Video editors need a good video card as well, plus RAM and serious HDD space, not to mention speed. 3D artists need a rendering farm (the cloud won’t do), specialty video cards, and loads of RAM. Graphic designers mostly need decent screens, plus adequate processing and human interfaces. Businesses simply need adequate interfaces and access to Office. Engineers and scientists need solid interfaces and processing power too. Desktop computers are massive, hot, and loud because, well, they’re doing some pretty hardcore stuff. The wonderful thing about the mobile revolution is we no longer need them for basic tasks like web surfing and social media, but there are people in this world who need them to do their jobs.

So where do I see things in a few years? I think we’ll still see both devices, and I think we’ll finally see real docking stations, but I don’t think we’ll see actual tablets or smartphones being used as PC replacements. This might work in the business world (and it would be kinda cool and probably seal the deal for BYOD), but smartphones still have a ways to go before they can stand in for a workstation. Eventually things will miniaturize, but who knows if they’ll catch up to the raw power, customization, and upgradability of a desktop. We really need a way to dock our devices together in a synergistic (I hate that word, sorry) fashion to share data efficiently, but it seems like a lesson in futility at this point to try and replace the desktop completely, especially if your peripherals will be taking up lots of space anyway. I mean, a tablet won’t be replacing your mouse and keyboard in production (if you disagree, you’ve never designed or developed anything in your life). A tablet won’t replace your dual monitor setup, or your RAID, or your subwoofer, so what’s the big deal about having a box to better process your serious work? Desktops aren’t just a box; they’re a human interface and peripheral setup that has stayed consistent for decades for good reason.

So I touched on modularity being absent from smartphones; now let’s talk about the recent social media interest in the Phonebloks concept. I think we’ve all had this idea at some point in time, and it really is a great idea. Obviously there are some engineering challenges with what they portray, but that’s not to say it’s impossible to do. If good ideas were easy to engineer, we’d run out of good ideas. That said, some of this concept is pretty silly. Some thoughts of mine:

– There’s no shame in merging similar functionality. Wifi, Bluetooth, and NFC should probably be merged into one block. I mean seriously, when was the last time your Bluetooth broke on your phone? Heck, when was the last time you used Bluetooth?
– The block and pin idea would probably be flaky. They should have gone a different direction, like PCI card slots… rails and a pinout on the end. This would make engineering much simpler and avoid having the phone burst into 10 pieces if you drop it.
– Simplify. Instead of trying to modularize everything, just have slots for where things will go. Have everything held together by a battery on top, then have 1 CPU slot, 1 big card slot, and 4 small slots. That way you don’t have to tear your hair out second-guessing the user’s configuration; a battery and a CPU have no hardware interface in common anyway.
– Don’t pretend this is greener. In all honesty, the Phonebloks idea might actually increase waste, and it will certainly up the manufacturing costs.
– In the real world, most of these things are crammed into a single board or chip. This is how they are able to make the devices so cheap and fast. Separating everything out means a headache for the manufacturers of these chips, and for the engineers who have to deal with the interfacing.

I’m willing to bet someone will eventually try to tackle this problem, but it won’t be something from Kickstarter. Just my two cents.

Take Care of Your Phone

[Image: MyTouch 4G]

It plays Angry Birds. That’s all that was important in 2010.

This phone is three years old. Don’t believe me? Let me tell you about it.

I got it for Christmas 2010. I had decided not to buy a smartphone until 4G rolled out, and that turned out to be an excellent decision. It’s a MyTouch 4G (made by HTC), the absolute top of the line back then: Android 2.2 (later upgraded to 2.3.4) and a 1 GHz single-core processor. It is starting to show its age, hardware- and software-wise, but physically it looks brand new.

What’s my secret? First off, let me tell you what I didn’t do. I have never once used a screen protector on this phone. I have never used a bumper case of any kind. I’ve carried this phone in my pocket every single day for three years; there’s barely a scratch on it, and the screen is perfect. All I did was follow common-sense notions of how to treat a smartphone. Stuff like, I dunno… don’t put it in the same pocket as your keys; don’t put it in your back pocket; hold onto it firmly at all times; keep it out of reach of pets and children. Do these things, and you will not need any extra crap tacked on to your stock phone. Treat it for what it is, a $500+ fragile box of awesome, and it will treat you well in return.

All that said, I really need to upgrade to something new. At this point in time I’m considering a Moto X, a Galaxy S4, a Galaxy Note, or an iPhone 5C. But if I wait until the holidays, I could probably get a Galaxy S5 by then (64 bit, I bet). Basically I want something with a massive screen, the newest Android, and excellent processing power.

So I was kidding about the iPhone, of course.

Thumbprints are not Passwords

After every Apple keynote, you can always expect to read the same type of sensationalist article. You know, the “Is this the end of _______ as we know it?” Well, iPads didn’t kill PCs, Siri didn’t kill Google search, and thumbprint scanners will NOT kill passwords.

See, biometrics aren’t really in the same vein as passwords; they’re really more of a supplement, or a verification of a person’s presence, like a PIN or smart card. They’re not trying to be passwords; they’re something else entirely. They’re just trying to make life difficult enough for people who aren’t you, or at least difficult enough without a specialized computer. Even a 6-digit PIN only has 1,000,000 combinations, which would take a computer seconds to brute force, but that’s beside the point.
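
If you want to see just how quickly, here’s a minimal Node.js sketch. It assumes, purely for illustration, that the PIN was stored as an unsalted SHA-256 hash:

// Toy brute force: try every 6-digit PIN against a known hash.
// The target PIN here is made up for the example.
var crypto = require('crypto');

function sha256(s) {
  return crypto.createHash('sha256').update(s).digest('hex');
}

var target = sha256('042689'); // pretend this hash leaked

console.time('brute force');
for (var i = 0; i < 1000000; i++) {
  var pin = ('00000' + i).slice(-6); // zero-pad to 6 digits
  if (sha256(pin) === target) {
    console.log('PIN found: ' + pin);
    break;
  }
}
console.timeEnd('brute force'); // finishes in seconds on a modern machine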

The point I’m going to make is that biometrics have flaws of their own that make them unsuitable as a password replacement. In all fairness, replacing a PIN is really all Apple is trying to accomplish at this point, and I think that’s great, so it’s the media that has it all wrong.

So what’s wrong with thumbprints compared to passwords? Just off the top of my head…

  • Thumbprints cannot be changed, revoked, or reset. If someone knows your thumbprint hash, you’re out of luck, forever. If you are concerned about the NSA, this should terrify you.
  • You are limited to 10 thumbprints. And even then, you’re not going to remember which finger you used for what website.
  • They technically aren’t replacing a password. A thumbprint is read as a hash and stored like a shadow password, and websites especially will work this way (see the sketch after this list).
  • The thumbprint reader and software introduce failure points. A hacker could take control of these systems and force a certain hash without scanning a thumb. A man in the middle could read the hardware interaction and simulate it later.
  • Thumbprints aren’t secret. You leave traces of your fingerprints on everything. Especially mobile devices.
  • Technology exists to “lift”, analyze, and reproduce said fingerprints, and it will only improve with demand.
  • Some people have stubborn fingerprints. Me, for instance. When I worked for the University, we went through several thumb and handprint timeclocks the professors had to use. Those machines hated me. One of them would routinely make me scan 8-10 times before it would get a match, while it worked fine for everyone else. I think it had to do with my hands always sweating.
  • They add a failure point for the device. As someone who has owned several scanners through the years… these things break. On a PC it’s not a big deal; you can run out and buy the same brand of thumbprint scanner, but what happens when it breaks on your iPhone? You buy a new phone, and you don’t get your data back.
  • You can’t let a trusted party use your thumbprint when you’re away.
  • You can damage your thumbprint. What if you burn your thumb while cooking dinner? You could be locked out of your computer for weeks, or in some cases, permanently.
  • There hasn’t been much study on this, but thumbprints could be prone to hash collisions, especially if you are forced to scan a “backup finger”. Different biometric technologies, of course, will vary greatly.
  • Your actual thumb could get stolen. Don’t laugh, it has happened. Some thieves are willing to cut off your thumb if it will give them complete, irrevocable access to your entire life.
  • Biometric hardware does not give standard readings, so you’re at the mercy of a third party to maintain access to your accounts. If you replace one piece of hardware with another brand, it will probably not give the right hash.
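
To make the “shadow password” point concrete, here’s a minimal sketch of how such a scheme might store and verify a print. It assumes, hypothetically, that the scanner reduces every scan to the same stable template string; only the hashing is standard Node.js:

// Sketch: the scanner output is reduced to a template, hashed, and the
// stored hash is what actually gets compared -- just like a password.
var crypto = require('crypto');

function enroll(template) {
  // Store only the hash, the way a shadow password file would.
  return crypto.createHash('sha256').update(template).digest('hex');
}

function verify(template, storedHash) {
  return enroll(template) === storedHash;
}

var stored = enroll('user-thumb-template-v1'); // hypothetical template string
console.log(verify('user-thumb-template-v1', stored)); // true
// The catch: if that stored hash ever leaks, you can't pick a new thumb.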

Passwords, of course, have problems of their own, but they are still the most secure and sensible way to protect your data. Even something easy to remember like “iLikeBag3lsAndCr3amcheese_” is incredibly secure for years to come. The thing is, security is up to you. Your password should be long. It should have caps, numbers, and special characters. You should not use the same password twice. It should not be obvious, and you should not write it down. Follow these rules and you have very little to worry about, besides forgetting it.
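
If you want to sanity-check that claim, here’s a naive back-of-the-envelope entropy estimate. It’s a rough upper bound; real crackers exploit dictionary words and patterns, so treat the number with skepticism:

// Naive strength estimate: bits = length * log2(alphabet size).
function estimateBits(password) {
  var alphabet = 0;
  if (/[a-z]/.test(password)) alphabet += 26;
  if (/[A-Z]/.test(password)) alphabet += 26;
  if (/[0-9]/.test(password)) alphabet += 10;
  if (/[^a-zA-Z0-9]/.test(password)) alphabet += 33; // printable symbols
  return Math.round(password.length * (Math.log(alphabet) / Math.LN2));
}

console.log(estimateBits('iLikeBag3lsAndCr3amcheese_')); // ~171 bits, naively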

What’s the right way to go about authentication? I dunno, I’m not a security expert. I would say multi-factor authentication is always best, so maybe a password-protected smart card would be the best way.

Icon Fonts: The Vast Wingding Conspiracy

The latest trend in web design is to use fonts to render glyphs, in place of the img tag and the still-elusive SVG. I’m sort of on the fence about this, although I recognize there are definitely some good reasons to do this. Instead of blathering on about theory, let’s go to the chalkboard, shall we?

Positives:

  1. Less messing with Photoshop and/or sprite sheets
  2. Faster loading than image files
  3. Advantages of infinite scaling
  4. Advantages of CSS (shadows, hover, etc.)

Negatives:

  1. Limited to the glyphs in the font
  2. Limited to font limitations (no color, no texture, etc.)
  3. Bad semantics
  4. Potential for bad Section 508 (accessibility) compliance
  5. Potential for bad SEO
  6. More verbose than img tag
  7. Potential to cause your site to suffer “Wingding Syndrome”
  8. Necessary to load an external font, including hundreds of icons you won’t be using
  9. @font-face, and the baggage that comes with it

Hmm. You know, I was actually meaning for this article to sing the praises of icon fonts, but I’ve suddenly changed my mind. Instead, we’re going to have a fair and balanced look at the above. So without further ado, let’s pit the two technologies against each other…

Logistics
The less dealing with Photoshop for icons, the better, right? Well, don’t forget that you’ll have to deal with font creation software instead (gross), unless you want to use stock images for your icons. That’s fine, except we went down this route in the ’90s with Wingdings 1-3. I still have nightmares to this day.
Also, don’t forget you will need to store and serve several copies of the font: the IE version, the Firefox version, the Chrome version, the mobile version, etc.
Winner: images

Browser and Loading Time
In theory, vectors will load faster than a sprite sheet, right? Here’s the thing. A font is a giant sprite sheet already… well, a vector sheet, but a sheet nonetheless. You might have a 20k PNG representing your icons, but you might have a 60k font representing tons of glyphs, 90% of which you will never use. But let’s say the font is smaller. So it’ll load faster, right? No. Custom fonts can take a few seconds to download and apply before showing up on the website. In the meantime, you’ll have random letters sitting around your page while it waits for the browser to do its thing. Is it worth it?
Winner: images
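
If you do go the font route anyway, you can at least hide the fallback letters until the font is ready. Here’s a minimal sketch using the CSS Font Loading API in newer browsers; “IconFont” is a made-up family name, and it assumes your CSS only reveals .icon elements once <html> has the icons-ready class:

// Hide icon glyphs until the icon font has loaded, so visitors never
// see raw fallback letters ("Wingding Syndrome").
if (document.fonts && document.fonts.load) {
  document.fonts.load('1em IconFont').then(function () {
    document.documentElement.classList.add('icons-ready');
  });
} else {
  // No Font Loading API: show the icons immediately and hope for the best.
  document.documentElement.classList.add('icons-ready');
}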

FX Advantages
What has always sucked about CSS is the lack of support for image effects. You cannot add drop shadows to an image or recolor it (at least, not in the way you’d expect). With glyph fonts, now you can. So I’ll concede to fonts for this, but the question I want to leave you all with is… why do you need hover effects and shadows on your glyphs?
Winner: fonts

Scaling
This is another brilliant argument for font glyphs. Img tags are bitmap only at this point, and adding retina resolution is a huge pain with no agreed-upon best way to do it.
Winner: fonts

Semantics
You need to add an icon to your page; which is more semantic: <img src="email.png" alt="Email:"> or <span aria-hidden="true">&#x25a8;</span>? Yeah. And what do you do about those with visual impairment?
Winner: images

SEO
How do you think Google feels when it has trouble reading your website? You know that friend of yours who uses Outlook, and the smiley faces turn into letters in your email client? J
I know you can use "content:" in CSS, but that’s not going to save you from Google, and "content:" is something you should always strive to avoid using anyway. It may not be happening now, but eventually black-hat SEO companies could start using ROT-13-rotated fonts to hide stuff they don’t want Google to see. I don’t think they’re doing it yet, but there’s that possibility they’ll spoil the fun.
Winner: images

This reminds me of when we were using Flash and Javascript to render custom fonts 6 years ago. It’s overkill. Wait for SVG, and in the meantime, just deal with bitmaps unless you have a really good reason to use this method.

In closing…
Don’t. Just don’t. Unless you plan to build an obnoxious website with tons of glyphs all over it.
Which you shouldn’t.

The New Way to do Event Bubbling

All Javascript developers should be familiar with event bubbling. For those of you who don’t know, event bubbling is when DOM events move up the chain from bottom to top. In other words, if you click on a <li>, the <li> gets the event first, then the <ul>, then the <body>. (Old Netscape did the exact opposite, “event capturing”, but with the advent of jQuery this is pretty much a moot point.)

So why is it important to know? Well, imagine you’ve attached a click event to each <li> in a list. It may not be a problem now, but if your <ul> ends up with thousands of <li>s, you’ve got thousands of bindings in the DOM, which is going to be a performance killer among other things. Instead, simply attach the click event to the <ul>, then inside the event, figure out which <li> got clicked and react accordingly.

By the way, this is an interview question for every Javascript-related job ever. Know what it is and why it’s important.

I was going to post a simple example of how to do this, but apparently this is entirely the point of jQuery’s new “on” method. I use “on” all the time, and you should too, and if you are still using “delegate” or the dreaded “live” to bind events dynamically, you should start using “on” instead. So anyway, here is how to use “on” to delegate events efficiently via bubbling:

// Bind once to the <ul>; jQuery fires the callback only when the
// click bubbled up from a child <li>.
$('ul').on('click', 'li', function(evt){
  alert("cream cheese");
});

What this code is doing is binding to the <ul>, but only firing the callback if a child <li> node was targeted. I’ve always used $(document).on as a force of habit, but really you should bind to the nearest parent that’s always in the DOM. Folks, it doesn’t get any simpler than this. Sure wish I understood this months ago….
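
For the curious, here’s roughly the same delegation pattern in plain Javascript, no jQuery. It’s a sketch assuming a reasonably modern browser (it needs Element.closest):

// Delegate clicks: one listener on the <ul>, then walk up from the
// click target to find the <li> it belongs to (if any).
document.querySelector('ul').addEventListener('click', function (evt) {
  var li = evt.target.closest('li');
  if (li) {
    alert('cream cheese');
  }
});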

Project Workflow for Lone Developers

I’m not ashamed to admit I’m a designer-developer hybrid. I worked as a graphic and web designer for several years. I did back end development professionally for 4 years. I’ve done UX development professionally for 3 years. I love design and I love coding, and I love doing both at the same time. So it’s not uncommon for me to build entire web applications by myself. This practice gets a bad rap because developers are typically awful designers, and vice-versa, but for me it comes naturally.

I’ve been designing since age 6 and programming since age 11, and never quite knew how I could merge those talents. Since kindergarten, everyone always told me I would grow up to be an artist, but I wanted to be a programmer. Once the time came when I needed to choose a major, I chickened out at the last minute and chose multimedia (I hated math and still do). Back in 2001, CD-ROMs and VB were king, and Director and Flash were still in their heyday. That was how you built interactive applications. But slowly, web and mobile took over this space, and bridged the gap between design and development. I was lucky to be caught in the middle of that merge.

Throughout the years, I’ve typically been unmanaged throughout the web development process, since the stuff I do is usually highly experimental. Because of this, I’ve developed and refined my own process for development that seems to work great for me. Your mileage may vary, but I’ve found this workflow to be the winning combination, especially for projects where I’m going it solo.

Lone Developer Workflow

  1. “Liveframing”, what I call wireframing with HTML. Create a preliminary GUI with no design, just basic structure. I prefer this to wireframing in most cases… honestly, I’ve never been a fan of wireframing tools, and I avoid them whenever possible. It depends on the project though.
  2. Mockup. Based on your liveframe, use Photoshop to design what the final website will look like. You want to throw a bone to the client to keep them busy awhile, but you also want to put a vision in your head of what you’re working towards.
  3. Database schema. This is the third thing I usually do, for two reasons. One, after building the GUI I have a pretty good idea of what data I’m collecting and how it will be used, and second, I want to do this before starting on the back end. I usually use Excel or pen and paper to draft a schema, and then build the actual tables as I need them. The schema will always change from start to finish, but usually I nail it with 90% accuracy. And usually, I end up needing fewer tables than I had originally schemed.
  4. Back end development. Once I have a barebones liveframe and a schema, I’m ready to start back end development. Of course I start in the planning stages: figuring out which pages do what, how the API will work, .htaccess considerations, etc., and generally deciding how communications will be coded. Communication formats are also decided in this stage (XML vs. JSON, data structure, REST considerations, etc.; see the sketch after this list). Then I start coding, and hook the liveframe up to the code as I go for testing purposes.
  5. UX development. I start elaborating on the liveframe by adding the necessary Javascript and jQuery.
  6. Test, test, test. As I move through my prototype on the front and back, I add or modify decisions for both sides. The pieces slowly come together. The client should be engaged during this time to verify the project is functioning under the proper requirements.
  7. Once the project is 90% solid, I start slicing the front end. The liveframe’s header, footer, and CSS will be replaced with the new design, and if you did it right, it should pop right in.
  8. Beta and QA testing. This is probably something you don’t want to do yourself. Find friends willing to test it out.
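
As an example of what “deciding communication formats” in step 4 looks like, here’s one hypothetical JSON envelope I might settle on. The field names are illustrative, not a standard:

// Agreeing on a response envelope up front keeps the front and back
// ends speaking the same dialect. Field names are purely illustrative.
var response = {
  status: "ok",       // "ok" or "error"
  data: {             // payload varies per endpoint
    id: 42,
    email: "user@example.com"
  },
  errors: []          // populated when status is "error"
};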

Behold, and Impart My Learned Wisdom Unto Others

Bartek’s Law of Coworking
Nothing says the digital era like piling people into a downtown office building with tons of talent and zero ideas.

Bartek’s Law of Private Sector Employment
Make the boss love you. Make management respect you. Make HR fear you.

Bartek’s Law of Project Management
“Man, it’s really hard to find developers. Let’s add more esoteric technologies to the stack and hopefully that’ll make hiring easier.”

Bartek’s Law of User Experience
“The client called. They’re worried that by simplifying the design, you’re confusing the user.”

Bartek’s Law of Software Engineering
The only career path where more money gets you fewer of the opposite sex.

Bartek’s Law of Google Image Search
No matter what the search term, you will always end up with furries in the results. Once you see the first one, you’ve reached the end of relevance.

Bartek’s Law of Design
Apple sets all trends, because that’s what your boss and clients want. If that means ushering in the return of early ’90s hypercolor, then so be it.

Bartek’s Law of Web Design
Take any random picture, blow it up and add 1000% Gaussian blur. Add some aquamarine and coral buttons. Congratulations, you made a website.

Bartek’s Law of IT Jobs
Everbank has had those jobs posted for 4 years now. Ignore those, they have no clue who they want to hire.

Bartek’s Law of Front End Job Hunting
The search term you’re looking for is not “frontend”; it’s not “front-end” either. The standard search term is “ninja”.

Bartek’s Law of Modern Web Development
Yo dawg, I heard you like having to learn 5 languages. So we put languages on top of those languages.

Bartek’s Law of IT Careers
You can live where there are awesome jobs. You can live where life is relaxed and easy. But you can’t live in both at the same time.

Bartek’s Law of Search
If you’re searching the web and can’t figure out why Google is giving you unusually awful results… you’re accidentally using Bing again.

Bartek’s Law of Photoshop
“Client: My 11 year-old nephew knows Photoshop. I’ll have him design the website to save money.”

Bartek’s Law of Art School
Congratulations, you graduated. Hang that piece of paper on the wall and commence to starving.

Every year, it goes something like this…

CEO: I just read in Forbes that…

2013: …all websites should be written in Scala. Let’s rewrite our Java to Scala, by next week.
2012: …relational databases are dead. Let’s migrate 30 years of data to MongoDB, they’ll be around forever.
2011: …all websites should use responsive design. Make it pop.
2010: …all websites use Ruby now. Let’s hire Rails experts; there should be tons of them, and willing to work cheap.
2009: …millennials only buy through social media. Let’s make a Tweeter, whatever that is.
2008: …SEO is the future. Let’s buy tons of backlinks, spam up blogs, and set up microsites. No way this plan will fail.
2007: …all websites should be web 2.0. Let’s market ours as web 3.0!!
2006: …all websites should use Flex. Let’s rewrite our entire website in Flex, but don’t use any Flash.
2005: …all websites should have a blog. What, you mean we have to pay someone to write articles for it?
2004: …all websites need an RSS feed. Content? What’s that?
2003: …IE won the browser wars, and no other browser will ever compete. Let’s only write for IE now.
2002: …ASP.NET is going to revolutionize the industry. This is how all web applications should be developed.
2001: …we need a Flash landing page to sell customers on our branding. Make it so.
2000: …e-commerce is the future. Let’s sell pet food online and advertise at the Super Bowl, this is going to be HUGE.

Twitter is the Anti-Internet

And that’s why I don’t use Twitter. Though there might be one or two other reasons why…

1) failwhale

2) Too much noise, and not enough quality noise. Just a line or two of contextless text, devoid of media, and mostly things that don’t interest anyone but the author. At least Facebook has a richer media experience, and a way to shush people who get on your nerves.

3) It’s creepy. My girlfriend once had a habit of Tweeting every place she went to (before Foursquare, an equally creepy service, replaced this habit). What ended up happening is a stalker started following her around and “accidentally” showing up in the same place as her all the time. Not cool.

4) Hashtags are a bad way to tag your content. Why should tags take up part of your character limit? This forces Twitter users to make short, one-word hashtags that don’t make sense, or to omit hashtags entirely. What Twitter should do is give hashtags a separate bank: tags get added automagically, manual tags can be added through a friendlier interface (instead of just text), and the number of tags is far less restricted. There can still be a tag limit, but it should be per tag, not per character.

5) The way URLs are used is sick and horribly wrong. Anywhere else on the Internet, do you ever see a raw link with no context tossed onto the end of your content? Of course not. We have this thing called a hyperlink, and Twitter should be using them instead of encouraging raw links. And as with hashtags, links should not count toward the limit, as they have nothing to do with content, and users shouldn’t be forced to bit.ly all their links.

6) URL shortening services don’t help anyone with their SEO, and they certainly don’t help Google. People and robots want to know where they’re going and where users came from. Forcing people to use these services hurts the Internet.

7) Very little profundity and meaning can be conveyed in 140 characters. The Internet should be a place to freely exchange ideas without forced limitations. When you force harsh technical limitations, you force people to truncate their communication. This just seems like it’s not in the spirit of… anything.

8) if u hav noticd the charactrlimit oftn forces slppy grammr. #AndThatReallyPissesMeOff

9) And continuing entries. #WhichDefeatsThePurposeOfTwitterButPeopleStillDoIt

10) http://bit.ly/16y0Ass #ThanksBitlyThatLinkSoundsPerverted

11) It has been hijacked by corporate marketing departments trying to fake the funk.

12) It is an evolutionary dead-end. What can they really do to Twitter to improve its problems, besides the solutions I’ve mentioned before? They tried Vine, but people aren’t really using it the same way as Twitter… it has become more of a comedy thing, because really, what else can you do in 6 seconds? You certainly can’t say anything important. Come to think of it, all Twitter is really successful at is being a low-brow comedy club. That and having petty fights.

13) The flow of information was not meant to be artificially “chunkated”. Have you ever seen the movie “Fight Club”, where the protagonist talks about how when you travel, you live a single-serving life? Twitter wants you to have single-serving friends.

14) Due to having to remove words and otherwise change the meaning of your sentences to fit the limit, you end up saying things you wouldn’t normally have said. And because of that, people will misunderstand you. And because of that, you will offend, confuse, or bore.

15) It doesn’t do anything Facebook doesn’t already do, and Facebook does it better. Why use a whole other service for such a simple system?

16) Hashtag ambiguity. #nowthatchersdead (Now Thatcher’s Dead? Now That Cher’s Dead?). Granted, domain names have the same problem, but this is the 21st century; we can do better.

17) Unnatural conversational threading. Most of the time, all you catch on someone’s Twitter is a partial conversation that, taken out of context, makes no sense at all. I’m tired of seeing tweets like “@somerandomguy I agree #yolo #swag”. And then I have to expand the conversation to see what the heck they’re talking about, which is usually equally uninteresting. No other website threads a conversation like this, because it makes no freaking sense.

18) Most people on Twitter are there because they need to be, not because they want to be. Celebrities, journalists, and businesses, and the people that follow these three. They’re on Twitter because it is what it is. And if celebrities and their followers use it, that’s a good enough reason for me not to.

19) Mourn-bots, and the shallowness of it all. Half the time everyone is in a mad rush to say things before everyone else does. We are all familiar with “RIP Stave Jobs” and “omg rip @stovejabs” repeated 100 million times whenever someone famous dies. I know I sound like a horrible person, but whatever. I don’t need the peanut gallery to tell me a celebrity died. I’ll find out.

20) Not really Twitter’s fault, but it gets overused in the media. Nothing irks me more than reading a serious story and then seeing what Flava Flav has to say about the Detroit Bankruptcy on his Twitter account.

21) According to journalists, it’s going to start some sort of social revolution in every country on Earth. That might be good or bad, but if it involves your average Twitter users, I’m guessing it’s going to be bad.

At any rate, it is pretty successful at inciting mobs.

This Ain’t Your Dad’s Job Market

Time and time again, you see bad information about how to get a job. Times have definitely changed, and they continue to change. There are so many recruiters and employers out there doing so many different things that it’s hard to have any hard rules about how to and how not to get a job, but I’ll at least share what I’ve personally noticed. What follows is my attempt to separate fact from fiction in job hunting.

1) Cover letters are important.
False. They’re a waste of time. The only time you need a cover letter is if you’re sending an email directly to an employer. The email is your cover letter. So it’s important to know how to write a cover letter, but only as a brief intro to your resume.

2) Never go over one page on a resume.
False. I fell for this garbage suggestion for a decade before I realized how stupid it was, and how many jobs I was getting passed over for because of it. 1 page makes you look like a noob. You should shoot for 2-4 pages. If you’re an executive or trying to get a government job, at least 8 pages. Bottom line is, unless you’re just starting out, it should be impossible to have a one page resume. For each job you worked, you should be describing in some detail what you did at the job and how it impacted the company, not simply skills used and hats worn.

3) Pack your resume full of buzzwords so that the computer can find you.
True. This is extremely important. Practice good “Resume SEO”. However, just like real SEO, it can be overdone to your detriment. Do not use buzzwords for things you don’t know how to do. Don’t list old-school technologies like ASP Classic, HTML4, Flash, COBOL, etc. unless you know what you’re doing. Avoid corporate buzzwords like leverage, synergy, low-hanging fruit, incentivize, team player, etc. Don’t hide buzzwords in small white letters at the bottom of the page. And certainly don’t pack your resume so full of buzzwords that it has a hard time saying anything coherent.

4) Always have an Objective in your resume.
False. Throw that crap away; nobody cares. Replace it with a summary instead. Keep it brief, and don’t tell your boring life story. Also, throw away your high school and college stuff. Nobody cares what your GPA is, and they certainly don’t care about high school. Finally, check for typos. A typo on your resume will earn it a first-class ticket to the trash can.

5) Employers are bored with template resumes. Go crazy with the resume layout.
False. This is a huge gamble, and should only be done if you’re a designer. The problem with fancy resumes is the computers don’t know how to read them, and colors may not turn out on the office printer. You also will have a hard time fitting important information if you’re trying to make shapes with the paragraphs, etc. Also, don’t put any pictures of yourself or your work in your resume. That can go in your portfolio. Lastly, don’t use “resume paper”. It’s just silly.

6) Education is everything.
False. Portfolio and work experience are everything. If you have neither, you’re going to have a hell of a time getting a job — but you will eventually. Keep with it, and eventually an employer will take a risk on you. Make them glad they did. Education is unnecessary if your job is in demand.

7) LinkedIn and GitHub are the new resume.
True, pretty much, although GitHub is more of a portfolio technically. But LinkedIn is definitely the new and improved resume system.

8) Certifications are everything.
False. They help, but they’re usually not necessary for most jobs. If you’re shooting for a job that’s $100k+ though, it’s definitely a good idea.

9) Constantly follow up during the process, calling the employer and sending them thank you cards.
False. Most employers hate candidates who do this. Their time is valuable; quit pestering them. If they wanted to hire you, you’d have heard back. Deal with it.

10) Networking is extremely important.
True. The old adage, “it’s not what you know, it’s who you know” is true, always has been true, and always will be true. Get on LinkedIn and start making contacts. Go to conferences and trade shows. Get friends to introduce you to their contacts. It’s a silly game we play, but this is the reality of business.

11) Recruiters are your friends.
True, when you need them, but they’re also your worst enemy when you already have a job and 100 of them are beating down the door trying to talk to you. This is a good problem to have, though. Let them call, and get to know them over Thai food. When the time comes, you will be thankful. These people are like real estate agents, but for jobs; free consultants who genuinely want to help you. You should always have 2-5 recruiters you talk to on a regular basis. Any more than that and they’ll drive you crazy.

12) Don’t discount Craigslist as a source for jobs.
True. But this is probably a last resort. I generally start with Indeed, which covers most job websites. It used to spider Craigslist, but knowing them, they probably sued. Also, get your resume out on LinkedIn and CareerBuilder, as those sites are used to harvest resumes all the time. But keep in mind: you shouldn’t have to look for jobs yourself; you should know enough recruiters to let that magic work for itself.

13) Your credit score is important.
True. Believe it or not, many employers check your score, and it helps to have a good one. But what’s more important is to make sure you have a clean reputation on the Internets. If your MySpace is still sitting around — delete it. Search your name + city, full name, street address, email, and username and make sure anything weird is cleaned up. Also, get your name out of all those robodirectories to avoid future issues. If you’ve got a mugshot you can’t expunge, then, well, you’re screwed I guess.

14) If at first you don’t succeed, you’re doing it wrong.
True, in almost all cases. If you aren’t even getting calls, there’s probably something seriously wrong with your resume. Read up on how to write a resume, and fix it. If you’re not passing interviews, then read up on how to interview (be confident and have all the answers!). If you’re still failing, you may be over- or underqualified for that type of job, and you should try something else. Try different search keywords that mean the same thing. If you’re a web designer, try “web developer”, “mobile design”, “front end”, or “ux designer”. They all mean basically the same thing; they just become more specialized and professional-level.

15) Hire a life consultant to help.
False. Don’t waste your money. If you can’t figure out how to get a job by yourself, you’re hopeless. Go back to school and learn something else, or move somewhere where there are jobs.

16) Start saving for retirement at age 70.
False. Start saving for retirement at age 40, heh. Because nobody will hire you in IT past that age.

17) Racism and sexism are rampant in IT.
False. For some reason IT jobs are just popular with white guys. Generally, there’s really no such thing as the employment gap; actually, women will be better-paid than men in a short number of years. There’s definitely an ageism problem though.

18) Don’t let other employers know you’re interviewing with other people.
False. They aren’t stupid, they know you’re talking to several firms. You might even use it as leverage for a better negotiation. When jobs compete, you win.

19) Don’t burn bridges when leaving a job.
True. Never burn bridges, even if you’re getting laid off. You never know when those people will be helpful later in your career. I personally don’t burn bridges, and what I’ve gotten from this is: 1) an offer of double salary to stay, 2) the opportunity to work remotely, 3) contractor/consultant opportunities (this happened twice), 4) being rehired at a significant raise, and 5) investor opportunities. This is why you don’t burn bridges. You know they were wrong to lay you off; let them call you back when they realize they made a mistake.

20) Refuse an offer that doesn’t pay well.
False, unless you have a backup job. It’s better to continue interviewing elsewhere after being hired than to stay on unemployment or worse. Employers take this risk when they lowball your salary.