My thoughts on the software itself will be in a later write-up. First, I’ve managed to talk a lot of people into switching to Sublime Text in just the past week, and would like to use this space to archive some of the better plugins. This is not intended to be a very informative review as much as a list I can point new people toward for things they should grab to get started.

Package Control
For the complete beginner, this is the first obvious requirement for enjoying Sublime. Packages (add-ons) are distributed through git repositories, and installing this one first means the command line install should hopefully be the only manual one you ever do. After that, you’ll use Package Control to install the rest.

Apache Conf Syntax Highlighting
The built-in color schemes don’t cover every file type. Installing this plugin gives you syntax highlighting for .htaccess and other Apache config files.

Doc Blockr
If you’re like me and believe ALL functions should be preceded by PHPDoc-style comments, this will help you along with that.

Emmet
Emmet is a beast of its own. More than a simple package, it brings a lot of very cool shortcuts to your HTML/CSS work. I have been especially infatuated with the abbreviations.

Git
Because you use Git for all your projects already and would like the functionality built in.

GitGutter
This will indicate changed lines in the gutter as you work.

SideBarGit
SideBarGit adds your git commands to the side bar for easy access.

JavaScript API
Autocomplete for JavaScript and jQuery.

SFTP
This is the add-on I’ve found the most impressive. If you need to work remotely and want your files auto-synced as you save, this gets it done. You can even set up FileZilla-style multiple connections for faster transferring.

SublimeCodeIntel
This package turned into a necessary evil for me. One of Sublime Text 2’s biggest downfalls is the lack of shortcuts to symbol definitions and other autocomplete features. While these come out of the box in version 3, it’s just not ready yet. I found it very difficult to install and get working, but once I had it, totally worth it. I’d write up some nice instructions on how I finally got it going, but I’m not even sure what I did.

SublimeLint
Another problem with Sublime out of the box is the lack of syntax error highlighting. Lint will take care of that for you. (Note: there’s also Linter, but I was not able to get it to work properly. Lint worked great.)

Tag
Auto close those HTML tags as you go.


If neither graceful degradation nor progressive enhancement is in your everyday vocabulary, click those links before reading on. Seriously. As we all (should) know, these methods are a nice, modern way of supporting multiple browsers by altering expectations instead of spending hours on hacks because someone is insisting ALL browsers must look the same.

The spirit here is about altering our expectations in design or user experience across platforms, while maintaining usability in even the older browsers. Browsers with a very low share should still be able to read data, perform actions, or somewhat flow with whatever the purpose of your site is. They just don’t get the liberty of things like certain design treatments, or the more advanced client side functionality.

After contracting some design and front end work to a company recently, I pointed out a couple problems with IE7 and IE8. These weren’t silly things like missing rounded corners, but serious problems like invisible input boxes on a site that relies on forms being filled out. This is the response I got:

Only 1% of all web users use IE7, that 1 percent is dropping fast. You can see the stats here:

http://www.w3schools.com/browsers/browsers_explorer.asp

IE 7 prevents using many modern coding techniques that are used on all other browsers.  If someone if still using IE 7, they can’t render most sites on the web the way they are intended. Making it flawless in IE7 will lower the quality of effects on all other browsers.

This attitude has become a virtual poison for the web. No one said flawless… I just wanted the form to show. IE7 probably really can’t render most sites “the way they were intended”, but it seems to me if you’re following the right practices, most browsers don’t. You code in one doing the best job you can, and tweak for the others when necessary. If you have only one intention for how the page should show, you’re probably doing it wrong.

You should make your sites “work” in all browsers. The word “work” doesn’t have to translate to the same, or even similar. Personally, I still stand by supporting all browsers that are still in a support life cycle. That means IE6 is officially out, but 7+ lives. Decide for yourself what “work” means based on your goals and personal analytics. Forgive yourself for the rounded corners not showing up everywhere. Just make sure it’s usable for the other guys.

Last time I blogged about this I got myself in all kinds of trouble and fame; however, the fact remains that two weeks ago I was laid off. My job ended with a Saturday call that consisted of little more than “don’t come in on Monday,” hurling me back to the other side of the interview desk. I really hate to say it, but it felt like a direct repeat of the last time. It also led to the question: do I take down the Selling Source articles or not?

I’m not going to repeat the past too much here, but for the few of my readers who have no idea what I’m talking about: last time I was looking for a job, I decided to write about the adventure. I talked about a couple places I went to and their interview process. One place in particular got their panties in a bunch and tried to get legal. The only really important thing to note here is the articles were not written as attacks on the company; I made fun of myself as much as others, and all I did was tell the truth. They didn’t like it, I didn’t like their reaction, and so on.

We’ve all heard horror stories of things people post on the Internet affecting their job, school, family, etc. All too many people don’t understand that what you write here is for keeps. Deleting information means virtually nothing once it has been exposed. Yes, I Google everyone I know, and your future employers will do the same to you. But where do you draw the line? The answer for me cannot just be that I live in privacy.

There are plenty of people out there who put their professional hat on for much more of the day than I do. I tend to live the lifestyle of being me, and only that. This means there’s really not much I’d say behind closed doors that I wouldn’t say at work or anywhere else. As always, I do not recommend my lifestyle for others; I just put it out here for your entertainment and learning purposes. This time, however, I was actually considering a little damage control on some things I’ve said that potential employers may not like.

Of course, one of the first calls I got was with a recruiter for S****** S*****. I fessed up immediately, letting him know they might not be so keen on me due to previous discussions and legal matters, and he instantly knew who he was talking to. I didn’t even have to tell him. The funny part was being called a celebrity, as he was quite excited to be talking to the author of the S***** S***** articles. The less funny part was the follow-up: “but of course I can’t consider you for this job!”

We did have a good talk though. I asked him, were he me, if he would take them down. His answer, being a recruiter for a very long time with many crazy stories, was “absolutely.” Writing damaging things about a potential employer will set off a lot of red flags with other potential employers. Even if this employer knows they aren’t the kind that would do what I called out for, they naturally have to ask themselves if I am a liability and will do the same to them as soon as we disagree.

He asked me if I would hire me after those articles. I said yes, and I stand by it on one principle: the article was honest, not a “LULZ THESE GUYZ SUCK” comment. It also wasn’t anything like this. However, I freely concede that I do not believe my thought process is in the majority. A company has to watch its back, and what’s it to them if they grab the next cool guy who knocks on their door after me? Not much. This train of thought actually did lead me to pull down the articles, and I thought perhaps I had learned a lesson, but it turned out to not be that simple.

The next day, another company called me precisely because I was the guy who wrote those articles! The score was now tied at 1 - 1, leaving me very confused. I’m really not often rewarded for my honesty. While in the end I didn’t go to work for them either, I will say we had a good laugh about people who give “code challenges,” followed by a really awesome 30ish-minute technical talk that I couldn’t have possibly made it through without proper experience. I’m now very fuzzy on the lesson to be learned here.

Here’s what I have learned:

  • The “direct” traffic on my blog tripled once I started sending my resume out again. These are people who type the URL in directly or click it in something like a Word document. Employers were absolutely checking me out before and after talking to me.
  • My LinkedIn page search results also went through the roof. This means they were using my provided links, and looking for me in other spots.
  • 2/4 companies I had more than initial phone calls with had code tests to give. Somehow they’re still doing this and it sucks. At least one of them was kinda fun.
  • In 13 days I had 1 offer, and 2 “just about to get an offer” situations. While I don’t know how many passed on me, it didn’t hurt me that badly. Or companies here in Vegas are crazy desperate.

Regardless, it is insanely important that you watch what you say and understand you’re never private on the Internet, even when you think you are. The level to which you want to protect yourself is really up to you. I choose to just be myself and let people judge that, no matter who they are, but I certainly can’t fault you if you feel otherwise. Your line of work also greatly influences the threshold of acceptability for your actions. Learn from me, and judge for yourself. Just err on the side of caution, as you never know who may look you up next.

Almost every new toy we’ve gotten in web development has gone through the same cycle: it is invented, we gladly use it because it saves us time, we overuse it, we get sick of it. JavaScript was the king of this cycle with the mouse-pointer-following stars and scrolling status bar text. Too much of a good thing is always exhausting in the long run, and usually causes us to overcompensate when cutting back. Right now we’re at the tail end of this cycle with AJAX. It has been overused and has evolved into something where its own name doesn’t even make sense anymore. Who still uses XML on new projects? At the peak of its usage, we had big players like Facebook and Twitter doing entire page loads through AJAX to try and save the user from a refresh. However, earlier this year even Twitter said it’s time to knock it off and go back to traditional stateless requests. This doesn’t mean it’s time to ditch AJAX, but it does point out the need to control it and start better defining how we can best use it.

As I find myself regularly in the position of defining, enforcing, and utilizing site-wide standards, I’ve come up with two simple rules with regard to AJAX. They do not apply to every website out there, but I believe they’re solid enough to cover most. They were inspired by taking a step back and looking at how websites feel to a user compared to how much AJAX is used, if at all. Websites that do not use it at all get that gross .NET-postback sluggish feeling. They’re most commonly seen on large corporate websites like a bank’s: I have the ability to look at my transactions, but even sorting a table is a painful wait with a rough transition from one data state to another. On the other hand, websites that load entire pages via AJAX sometimes feel more fluid, which makes the wait less painful, but the delay and some potential choppiness are still present. It’s damagingly easy to get more excited about a technology the quicker it becomes to implement on each new project. Forgive the cliché, but moderation is key.

Rule 1: Update it how you loaded it

If you want a data table full of widgets that can be sorted, filtered, and paged through via AJAX calls, then do the initial data population through an AJAX call once the page is loaded. In other words, do NOT do this:

1. Load data in server side language while the page is being compiled.
2. Pass said data to the view layer
3. Loop through data to create an HTML table with server side language
4. Write JavaScript or insert jQuery plugin to manipulate the DOM of created table

This is the most common implementation I see, and frankly the most incorrect. Yes, this method will take you from page request to page loaded the fastest, and yes, it is certainly the easiest and fastest to implement. It also turns into an absolute nightmare to maintain. This method forces you to format your HTML table in the view on the server side, and then duplicate that HTML in your JavaScript or AJAX response when it comes time to change the data loaded in the table. That alone breaks one of the very core reasons we use a server side language: the ability to change HTML in one place rather than many. It also deviates from very old and established development principles. In this case our data table is an object. You don’t construct an object out of data; you construct an object and then give it data. Therefore this process is far more appropriate:

1. Create an empty table shell in server side language
2. Output table shell from view layer
3. Once page is loaded, JavaScript requests an initial load of data via AJAX for the table using the same methods it would use for sequential data loads

Now maintenance is all done in one place, and our code is truer to object oriented design. Where you wish to perform the HTML / data parsing is up to you. If done in moderation and with a proper template, crunching the data through your template engine and returning HTML via AJAX is okay; returning just raw data and wrapping HTML around it client side before placing it in the proper spot in the DOM is okay too. If you choose the former, just be sure not to code yourself into a corner in case you want this same data in another format later on. Use something like a mangler.
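To make the raw-data variant concrete, here’s a minimal JavaScript sketch; the endpoint URL, field names, and table id are all hypothetical, not anything prescribed above:

```javascript
// Sketch of the raw-data approach. The endpoint, field names, and
// table id are assumptions -- adapt them to your own markup.

// Turn an array of row objects into <tr> markup. Keeping this in one
// function means the initial load and every later sort/filter/page
// request share the exact same rendering path.
function buildRows(widgets) {
  return widgets
    .map(w => '<tr><td>' + w.name + '</td><td>' + w.price + '</td></tr>')
    .join('');
}

// Once the page has loaded, request the initial data exactly the way a
// later sort or page change would (browser-only, so not exercised here).
function loadTable(params) {
  return fetch('/widgets/list?' + new URLSearchParams(params))
    .then(res => res.json())
    .then(data => {
      document.querySelector('#widget-table tbody').innerHTML = buildRows(data);
    });
}
```

The empty table shell comes from the server; `loadTable({sort: 'name'})` fills it after page load, and the very same call handles every subsequent re-sort.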

Most importantly, this set of steps has transformed AJAX from the over/under usage described earlier into a tool that loads and manipulates data on an already loaded web page. We no longer need to refresh the page to sort a column, nor do we need to reload the entire DOM. Which nicely segues into the next rule:

Rule 2: Use AJAX for loading data, not layout

Transmit data with indicators of layout, not the layout itself. The AJAX process is most commonly seen returning responses in XML or JSON, both of which are, by definition, formats for transmitting data. Naturally we cannot take this as strictly as saying we only transmit plain text, but a line has to be drawn as to what may and may not be transmitted. As mentioned earlier, the more you tailor the AJAX response to match the HTML already loaded in the browser, the more difficult it is going to be to manage. Therefore it is all right to transmit data back with some text-decorating HTML, or even some minor layout tags (table rows, but perhaps not an entire table itself). What is not okay is to rely on AJAX calls to alter entire layout segments of a web page, or the entire page itself. That quickly becomes too heavy a load to transfer, and enough work for JavaScript that the user feels the processing. Here are some examples:

I have a table that I can sort through AJAX.
Good: AJAX request returns plain data in an array that JavaScript can parse into new rows; or in some cases the request returns the data wrapped in <tr> and <td> cells.
Bad: AJAX request returns the entire table or more surrounding HTML.

I have a photo gallery where the user can click a button to cycle through images that are requested independently through AJAX.
Good: AJAX request returns an array of or single image properties (src, dimensions, etc.)
Bad: AJAX request returns the image tag and the surrounding div container. JavaScript then replaces the currently loaded container with a new one.

I have a site search that uses AJAX to load results. These results are also formatted to where the searched phrase is highlighted.
Good: AJAX request returns the search data and a simple container around the data such as a <li> or <div>. The highlighted text has a span tag with only a simple class attached to it. The actual highlight color is in my .css file.
Bad: AJAX request returns an entire <ul> or <table> of data. The highlighted text comes back with spans that use style rather than class.

These examples all walk a fine line that is defined differently for every website, depending on how the front end developer wrote the layout in HTML. However, the fact remains that the tags we return are indicators of placement and style, rather than HTML expected to just work on its own. If you really need to load that much HTML, ask yourself whether this should perhaps be a page refresh rather than an AJAX request.
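As a sketch of the search example’s “good” side, here’s how the client might wrap results; the result field name and the hl class are assumptions, and the actual highlight color would live in the .css file:

```javascript
// Sketch of the "good" search response handling. The result field name
// and the "hl" class are assumptions; the highlight color belongs in
// the stylesheet, never in the response.

// Wrap each occurrence of the phrase in a span carrying only a class.
function highlightPhrase(text, phrase) {
  return text.split(phrase).join('<span class="hl">' + phrase + '</span>');
}

// Each result becomes one simple <li> container; the surrounding <ul>
// is already in the page and never travels over the wire.
function renderResults(results, phrase) {
  return results
    .map(r => '<li>' + highlightPhrase(r.title, phrase) + '</li>')
    .join('');
}
```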


Good developers always appear to have a higher upfront cost than the not-so-good ones; however, over the long haul you often end up spending about the same amount. This is a conversation I frequently have with potential clients. A poorly made product is both harder to expand down the line and more apt to break in unexpected ways, while a properly architected application allows for expansion without disrupting the proverbial beehive every time you need to make a change. The same thought process applies to the developer himself while doing the actual programming: more upfront work means a longer timeline, but easier maintenance.

Of course there are exceptions to this very generalized rule. Sometimes you just aren’t sure if the product is worth the time or will even be remotely successful. I’ve been known to slap something together in a pinch as a proof of concept. In the times where these ideas do become successful, it’s imperative to readdress the areas you skimped on if you want to have a solid product for your customers.

Some people are just incapable of grasping this concept. Last week my wife, a photographer, sought to purchase a pre-built website from the company Blu Domain. The price was spectacular next to the amount of time I’d have to invest in making something similar, and their designs are really pretty good. However, that’s about as far as I can go with the compliments. Once payment was made and we were engaged with this company, the interaction got… fun. Here’s part of the setup email she received:

GODADDY:  SETUP REQUIREMENTS:
PLEASE NOTE YOUR GODADDY ACCOUNT MUST BE SET UP ON A LINUX SERVER NOT WINDOWS

Please answer the following:  (godaddy.com will recommend you NOT to give us questions 4 and 5 for your safety with security. However we need this information to access your MySQL database and you can trust that we will not enter any areas other than what we need to access to get the job done)

1. What is your ftp host name or ip?
2. what is your ftp user name?
3. what is your ftp password?
4. what is your godaddy.com username?
5. what is your godaddy.com password?

SETUP REQUIREMENTS FOR ALL OTHER NON-BLUDOMAIN SERVERS:

Please answer the following:
1. What version of PHP does your server have? (php4.xx or php5.xx)
2. What is your ftp host name or ip?
3. what is your ftp user name?
4. what is your ftp password?
5. what is your control panel url?**
6. what is your control panel username?
7. what is your control panel password?
8. what is your hosting company url?
9. Does your server meet *ALL* of the below requirements?
10. Does your server have *all php settings* already set?*****

** control panel is needed to add/edit the mysql database. If there is no control panel then we need the database host, name, user, and password to be sent to us, and the database must allow remote connections from desktop programs such as mysql-front.

***** if the server doesn’t have the settings pre-set to our specifications, please note whether we need to edit a php.ini file or a .htaccess file to set the settings locally

REQUIREMENTS:::Please confirm with your server that they meet the following requirements:

1. PHP 4.xx or 5.xx with:
  a. safe-mode must be disabled (turned off)
  b. register_globals must be set to on
  c. max_upload_filesize set to 6MB or above (or .htaccess
      files that can change the php.ini settings for individual
      accounts must be supported)
  d. extensions_dir setting in php.ini MUST EXIST. *** very important ***
      For windows servers this folder must be located
      on the same drive as the document root and scripts directory.
  e. SecFilterEngine  must equal Off
      SecFilterScanPOST  must equal Off
  f.  php must have php gd library installed
  g. session_path must be set to a folder that exists
  h. thread safety must be set to Off

2. server must allow emails to be sent via php script to any email address, and from any email address

3. server must support MYSQL and must have at least 1 mysql database available (2 are required for the website “JUAN”)

IF ANY OF THE ABOVE REQUIREMENTS ARE NOT MET YOUR SITE WILL NOT WORK PROPERLY

Red flags everywhere, and I’m not going to hit all of them. Your initial reaction is likely similar to mine. Hasn’t everyone been told hundreds of times already to not give out passwords? Not only do they ask for a GoDaddy password if that is your host (some serious potential power in there), but they actually admit – yes, GoDaddy does recommend against this, but you should do it anyway. Trust us! Absurd.

But let’s get back on track to the big one that really caught my eye: those server/PHP requirements! Apparently if you are a fan of security or non-deprecated functionality, this is not the company for you. The biggest problem is that register_globals is deprecated in PHP 5.3. Worse, it is being removed completely in PHP 5.4. I told my wife she absolutely had to ask them whether this website will break the day I install PHP 5.4 on our webserver, or whether the fix will be a future (soon) free upgrade. She utilized their handy online chat with a little quote of mine as to what I really felt about their skills if those are their requirements, and this is what the representative said:

Outdated code? Really? Because we’re using html5. It’s the latest HTML coding right now. I’m not too sure where you are seeing outdated coding. But if you would like your money back, can you please respond to your ticket you’ve already filled out.

Alright guys, stop sidetracking me. I’m trying really hard to stay on topic here. You seriously give a paying client a “Really?”? That’s your idea of customer support? More importantly, though, thank you for giving me a real-life developer example of the saying “lipstick on a pig.” Yes, I know HTML5 is the most amazing buzzword ever right now, but does it really do you any good when the core of your system runs on things Zend itself has been telling us to stop using for years? I suppose you’ll have some fine looking errors. Later on in this conversation (typos included):

From our senior programers, our sites will continue to work with the upgrade of php in the future. And if there is an issue, i’m sure with over 100,000 sites out there, we would make sure all of them would continue working with upgrades in php, mysql, html, server upgrades, etc…..

Then why the requirements? What does 100,000 or 1,000,000,000 or just 1 site have to do with anything if you say in your email:

IF ANY OF THE ABOVE REQUIREMENTS ARE NOT MET YOUR SITE WILL NOT WORK PROPERLY

Now we circle back to my introduction. Part of their service is that they will install the site for you on your own servers. This is necessary for their business model, but scary as hell too, and it turns into a textbook example of why you should build best practices into your code from the start. I’m not going to blast them on some things like safe mode (it sucks), but register_globals was a terrible idea even before it was deprecated. It’s a perfect example of a feature that makes initial development a little faster and easier, but turns your future maintenance into an absolute nightmare. The longer your code gets, the harder it becomes to deal with variable names coming in from multiple places. A small amount of work in the beginning would have gone an awfully long way toward where we are now. 100,000 websites is a lot to have to update. Don’t put yourself in that position.

I very much dislike magic numbers. If your code contains something in the realm of:

if ($some_flag == 1)

You’re probably doing it wrong. What is 1? Likely not much more than an arbitrary number you created to symbolize a status. It could mean something as a boolean, but the kind of example I’m talking about here is only boolean to a human; PHP, or any language, will see it as an integer. If that’s the case, then use constants. Your code should instead look like:

if ($some_flag == Some_Class::ENABLED)

No magic number, and easily changeable when you come back to this in the future. However even worse, are magic strings. I saw this the other day and absolutely hated it, but on first glance couldn’t think of a better alternative:

if ($date_field == '0000-00-00 00:00:00')

This string of zeros is how MySQL stores a not-null, unspecified date field. For a myriad of reasons, having this string in your code is dangerous: it’s prone to typos, subject to change, and far too DBMS specific. After playing with data conversions and other miscellaneous date functions, here’s what I came up with and will call the “proper” way:

if ( (int) strtotime($date_field) > 0)

The date function strtotime() will return a negative number when given '0000-00-00 00:00:00'. That’s good, but we want to be even more thorough. If $date_field were a garbage string or null or anything else strtotime() cannot deal with, it returns false. Comparing false to zero relies on type juggling, but the integer value of false is reliably zero. Hence the (int) cast.

The zero-date string will come back negative, and something invalid will come back false and convert to zero. If a valid date string was given, the result will be a positive integer. Desired result achieved, without the use of arbitrary numbers or strings. This method isn’t perfect: strtotime()’s powerful ability to convert strings like “tomorrow” could produce some undesired results, but you should be expecting a date here anyway. You definitely have other problems if a rogue string that strtotime() can parse gets passed in.


I do not believe in getting so used to a tool that it remains timelessly more worthwhile than new competitors. How many times have you heard (or said) these:

“I can’t change editors… I already know all the short cut keys for this one.”

“Yeah, I know that tool makes it easier, but my old tool I’ve been using for 10 years! Its all set up how I want it.”

“This tool is the only one that will do it how I like it. No, I haven’t looked into any others in the last few years.”

In a world where technology so frequently doubles itself and new tools practically pour into the market, you can’t afford to go stale and expect to be seen as among the best. With little exception, if you feel what you use on your computer exactly fits your needs, that’s likely a sign it’s time to start finding new tools. Make your life easier.


Inherent flexibility is what makes PHP tick, and in the end it proves both powerful and crippling. As odd as it sounds, just because we have a handy tool at our disposal doesn’t necessarily mean we should use it, especially in an open source language that has had so many hands making changes with few common standards among them. One such tool is the ability to suppress errors on a line-by-line basis. To be sure we’re all starting on the same page, observe:

// This will fail and throw an error at level E_WARNING:
fopen('file_that_does_not_exist.txt', 'r');

// This will fail and not throw E_WARNING:
@fopen('file_that_does_not_exist.txt', 'r');

The “at” sign in front of an expression tells PHP to suppress any errors/notices/warnings and proceed as if nothing happened. Sounds handy, but it is almost always poor practice. You have better options. For example, if a function is made to throw an exception, then encapsulate it in a try block. A try block would not work for the fopen() example above, however, as it will not catch a warning. Yet this is still not an appropriate place for suppression. Rather than hiding potentially useful output, consider:

if (file_exists('file_that_does_not_exist.txt'))
{
    fopen('file_that_does_not_exist.txt', 'r');
}
else
{
    // File didn't exist. Do something about it.
}

The above is a far more elegant solution. We can check in a way that will not error, and handle the scenario where the file does not exist.

So when can we use this? Not often. While I’m sure there are more, I was only able to dig up three very distinct scenarios.

unserialize()

The unserialize() function is a pain. It performs absolutely no validation of a string before attempting to convert it. If we feed it a string that is not serialized, it will both return false and throw a warning. Like fopen(), a try block will not work. Unlike fopen(), we have no companion function to wrap around the call as a check; there is no is_serialized(). They got this situation mostly right with json_decode(): it returns null when the string is not valid, and we also have access to json_last_error() to better steer our code. No such luck with unserialize(), so when I don’t know for sure whether a string is serialized, I oftentimes do this:

$dest = @unserialize($destination_url);
if ($dest !== FALSE)
{
    $destination_url = $dest;
}

Putting the @ before unserialize() prevents it from throwing the warning, and I can rely on its false return value to decide what to do. (The one edge case: a string that is a legitimately serialized false will also return false, so compare against serialize(false) first if that can happen to you.)

SimpleXMLElement Class

This one is another John Squibb contribution. When instantiating a new SimpleXMLElement, if the given string is not valid XML, the class will throw an exception; however, a warning is also emitted while parsing the string. Example:

$xml = new SimpleXMLElement('booo');
// PHP Warning:  SimpleXMLElement::__construct(): Entity: line 1: parser error : Start tag expected, '<' not found

Since the warning cannot be caught, and there are no complementary functions to verify validity first, suppressing it becomes the best choice:

try
{
    $xml = @new SimpleXMLElement('booo');
}
catch (Exception $e)
{
    die($e->getMessage());
}

The try will handle the exception the class throws, and the @ will suppress any warnings during parsing. This gives us the desired effect of knowing when the parser was not able to handle the given string.

PHP 5.3 Style Ternary Shortcut

As previously discussed, the new ternary shortcut is very short and handy, but is practically useless in everyday development. We have the ability to say:

$email_signup = ($_POST['subscribe']) ?: 0;

But what if ‘subscribe’ does not exist in $_POST? That’s certainly possible if subscribe is a checkbox. We get a warning, and a completely unreliable feature. If you really want to use this shortcut, I recommend doing it as such:

$email_signup = @($_POST['subscribe']) ?: 0;

That’s about it

I’m sure there are a couple more appropriate uses, but I don’t know them. Most of the time you reach for the @ to suppress errors, you have better means. Warnings inside PHP are a very old, pre-OOP way of thinking; they are not needed, yet they’re likely not going away any time soon either. Just remember that while some of these tools and shortcuts are great, you still need to ask yourself if one REALLY is the best way. It’s easy to throw an error suppression into your code in the heat of the moment, and almost always incredibly frustrating when you come back to your work later on.


I’ve noticed a bit of a natural cycle in web developers as they journey from just learning to writing code professionally. As a back end developer, most non-technical people don’t even try to comprehend what I do. It’s just too mystifying. However, to many of us actually in the field, there is little difference between Facebook, your bank’s website, and Amazon. It all blends together after a while. The learning cycle stems from the desire to learn, and from the eventual realization of that sameness. Summarized in this comic thanks to the amazing site toonlet.com.

Phase 1: Reinvent the Wheel Always
PHP / MySQL tutorials have been gone through, and the concept of putting data into a database and pulling it back out is understood. Budding developers at this point have yet to see the bigger picture and stick to mostly procedural code. Because of this, they also tend to reinvent the wheel at all costs. At first, it’s an excellent way to learn. The developer learns a lot of lessons the hard way, spends hours (days) on bugs, and puts in their grunt work time. The reasoning for wheel reinventing then matures into something a little more dangerous. Because the developer has locked himself into this trend, he doesn’t quite understand how other frameworks and projects could possibly apply to his work. He’s quick to find reasons why his work has to be custom, and defends his decision to the end. The developer is usually stuck in this phase until picked up and set down in a new environment where he sees work being done in other ways, and begins to understand the concept of keeping code a little more modular.

Phase 2: Hybrid
The developer has now been exposed to multiple projects. These projects all had wrappers or helpers, and he loved some of them for the time saved but found others laborious. The idea of a framework, or why one would ever wrap a preexisting function in their own library, is still beyond him. It seems just stupid. However, when this developer starts his own project, he definitely has a database wrapper class he’s created by now, and is more than happy to drop it in without regard to how well it actually fits the project. It saved him before and has to save him again. Some developers never leave this stage. It may even be just fine to stay here. This guy can definitely build any site needed. It won’t necessarily be done well, but if he’s building sites for non-technical people, those higher-ups will likely never know (or want to know) the difference. This person is cheap, loyal, and now feels he’s just fine being independent. Other developers would dislike him, but small business owners love him.

Phase 3: Converted
Controllers, models, views, wrappers, and helpers are all understood now. Perhaps not totally agreed with, but they can be defined and created. The developer is definitely eligible for a senior position given sufficient experience, and can stay in this spot and be happy for a long time. However, this stage is also dangerous. Many grow too comfortable with and reliant upon certain frameworks or ideas, to the point where they lose the versatility across projects they once had. Prospective technical managers will be wary of this possibility and will want to know the developer can adapt well to a different project. As long as the developer keeps up with trends, design patterns, and new frameworks, he does just fine.

Phase 4: Modularized
Most developers never make it to this phase. Some because they just cannot grasp the concepts this well, and others by choice. This developer will modularize every piece of functional code down into as many small functions as possible, creating a vast array of Swiss Army Knife tools. If not careful, these tools become too convoluted for their own good and only confuse things. When done well, this developer is more of an architect and can be a great asset to your project. If you have one of these on your staff, just be sure to keep him in check. He likely will not do that very well himself. It’s up to management to understand and enforce what is proper code, and what is going to get the job done on time. This developer is the pinnacle of wanting to stretch out timelines “to do it right.”

Obviously not everyone hits all of these stages, but I do believe this cycle covers your average new web developer. It seems to me the programming language also dictates where someone will end up. PHP tends to be what you make of it. It can be simple and stupid easy, or it can handle some fairly advanced concepts. .NET developers tend to not make it far through the chain because of all the built in libraries and specific direction as to how to use them. C++ developers need to make it to the end of this cycle as fast as possible or they drown.

The important part is to be sure to put each developer in the right seat. When they’re too comfortable, pick them up and set them down elsewhere, or encourage them to research and implement a new technology. Each of the above has pros and cons and can still be profitable, provided you give them the right projects and ensure their time is managed well.


Features like namespacing aside, one of the parts of PHP 5.3 that initially had me more excited was the ability to shorten ternary statements even further. I know, some of you are anti-ternary altogether, but personally I love it. First, a quick refresher. This is an annoying bit of code for one variable check:

<?php
 
if (empty($_POST['action']))
{
	$action = 'default';
}
else
{
	$action = $_POST['action'];
}
 
?>

Not only is it a lot of code, but it’s just bad practice to create a new (and potentially widely used) variable inside an if statement. Ternary gives us the ability to shorten the above to this:

<?php
 
$action = (empty($_POST['action'])) ? 'default' : $_POST['action'];
 
?>

The previous two pieces of code are identical in functionality. If the above can be referred to as (expr1) ? (expr2) : (expr3), PHP 5.3 says that expr2 may be omitted, giving us an even less repetitive bit of code. The problem is that this new feature is nearly useless. expr1 has to evaluate to true or false, so the variable we’re evaluating has to exist. Given the two examples above, the natural progression toward more concise (if not more logical) code would be:

<?php
 
// This MIGHT error
$action = ($_POST['action']) ?: 'default';
 
?>

However, the above example is not advisable. If the “action” key does not exist in $_POST, you have yourself a warning. It’s a dangerous assumption to make, especially considering that were “action” a checkbox left unchecked, it certainly would not exist. It seems to me that if they had made it work more like the empty() function, this shortcut would be infinitely more useful than it is now. It’s ironic that not only does the best example of ternary usefulness I could come up with involve $_POST and empty(), but php.net illustrates the feature with the same thing. The one time this shortcut would be the most useful is the one time it cannot be used.
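For what it’s worth, PHP 7.0 later added the null coalescing operator (??), which behaves much like the wished-for version: it effectively checks isset() on its left side, so a missing key produces no warning. Note that it tests existence rather than emptiness, so an empty string still passes through. A sketch, for anyone on a newer version:

```php
<?php

// PHP 7+ only: ?? falls back when the key is unset (or null),
// with no warning for a missing index.
$action = $_POST['action'] ?? 'default';
```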

But wait! There is a workaround for this! Credit to John Squibb for coming up with this (hacky) gem:

<?php
 
$action = @($_POST['action']) ?: 'default';
 
?>

The “@” symbol tells PHP to suppress any error that may result from the line it precedes, thus making this work as desired. Personally, I am usually very much against using it, but this may be a nice exception (pun!). How dirty you want to feel is up to you.
