Log Management

June 30, 2008

Another issue we faced in dealing with our SAS 70 audit was log management.  Every system admin deals with this issue; most of the time we just ignore it.  You have all sorts of information stored in log files on all your various servers.  If you have more than a handful of servers and were going to review them regularly, you would probably be doing that just about all day, every day.  Specifically for SAS 70, we needed processes to review things like access logs, backup logs, etc., from all of our systems on a regular basis, as well as to document this review process so that we could prove someone was actually reviewing the logs.

There are several companies out there with pretty good products in this area; a Google search for log management will turn up several results such as LogLogic, EventLogManagement, and Splunk, among others.  We looked into several of these, but in our opinion, the best value for our money was definitely Splunk.  Basically, with Splunk, you set all your servers to send their log information to a main Splunk server (or several distributed ones), either by having syslog or a similar service forward the data, or by installing the basic Splunk server on each machine and configuring it to just forward the data to the main Splunk server.
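For the syslog route, the client-side change can be as small as one line.  A sketch (the hostname here is made up; the central Splunk server would be configured to listen for syslog, typically on UDP port 514):

```
# /etc/syslog.conf on each client -- hostname is hypothetical.
# Forward every facility and priority to the central log server.
*.*    @splunk.example.com
```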

Once all your log data is in the main Splunk server, you can simply “search” the logs just like a Google search.  If you have everything configured to extract the correct fields, you could do a search like user=jsmith to see everything that John Smith has been doing: what servers he has logged into and what he has accessed.  One very good report this can produce relates to employee termination.  You can see what someone accessed just before they were terminated and what, if anything, they accessed after they were terminated.  Obviously, the after-termination list should be empty.  But that’s just one advantage.

We are still pretty early in our setup and still working on some of the field extraction and report generation, so I’ll likely have some better examples and praise for Splunk in the near future.  For now, I’m interested in hearing how other small businesses are handling this issue.  Anyone willing to share?


Defending Against Attacks: Insiders vs. Outsiders

June 27, 2008

I saw this article this morning pointing to a study showing that, contrary to popular current belief, attacks from outsiders pose a greater risk than attacks from insiders.  If you read through the comments you’ll find a lot of people who share the same ideas as I do.  This study doesn’t really seem to be all that valid and seems to make more of a terminology change than anything.  What do you consider an insider versus an outsider?  It seems the study has classified insiders as direct employees.  There are phrases that make it sound like contractors who have been given access were classified as “outsiders” even though we have given them trust.  I would argue that anyone who has been granted any level of access to an internal system is an insider.

I also think their numbers are a bit off for a few other reasons.  One reason is that the people taking the survey may not be giving honest responses.  Another possibility is not accounting for an attack by an outsider that required insider help.  Many attacks from the outside require someone on the inside downloading and installing an executable or clicking a link in an email.  These are almost always accidental, but I would still classify this as an internal attack, or at least an external attack that required internal assistance.

In any case, there are at least a few things any smart small business should do to protect against threats…

  1. Implement a good firewall to keep direct external attacks out of your important internal systems.  There should be no direct access from the public network to any business-critical or sensitive data.  This may require implementing a VPN for any external employees, but these systems are becoming much more affordable.
  2. Train your employees on general security practices.  Teach them how to avoid getting viruses by following some email best practices (don’t click on any executable attachments, etc).  Teach them about social engineering and how to deal with it.  Things like that.
  3. Install Anti-Virus software with on-access scanning on all personal desktops/laptops.

Obviously, there are many more things you can and maybe should do, but I would consider the three above definite requirements that will greatly reduce your risk of attacks.


Ransomware

June 26, 2008

I came across this article recently and this related one which really shows why backups are important.  Apparently someone has created a virus which uses powerful asymmetric cryptography (a private and public key pair) to encrypt data files on a user’s PC.  Not only does it encrypt these files, but it then deletes the original, unencrypted version and displays a message for the user stating that they can buy the decryption tool if they send an email to a given email address.  Apparently they are selling the decryption tool for $100 to $200.

The first article I linked states that they were surprised by such a virus and never thought they’d see it, but I definitely am not surprised…except maybe surprised it hasn’t been more widespread yet.  With all of the phishing scams going on, this sounds right up the same alley.  Let’s encrypt their data and demand $100, which can conveniently be paid by credit card if they give us their credit card number.  Or, let’s demand $100 and then just forget to ship them the decryption tool.  I mean, we are talking about hackers here, so it’s not like they have much in the way of ethics.

Kaspersky seems to be the main player trying to brute force the private key in order to decrypt the data.  However, as many sources have pointed out (see the second article), this really is a fairly pointless exercise.  According to the second article, a Kaspersky employee stated that a brute force attack on this key would take about 15 million modern computers about a year to crack…though other experts say that’s an underestimate.  That is the entire point of asymmetric encryption, after all.  And once you do crack the key, all the virus writer has to do is generate a new one and you’re back to square one.

There have also been suggestions that Kaspersky is behind the whole thing as more of a publicity stunt.  Currently, I’m inclined to agree with these accusations.  After all, being in the security industry, they should know the pointlessness of their attempts to brute force the private key.  A smarter approach would be to claim to be a victim, pay the virus writer for the decryption tool, and then just release it publicly.  Of course, that’s assuming the virus writer actually does provide the tool and doesn’t just take your money, but since it’s only $200, I think Kaspersky could take that risk.

So, take it for what it’s worth.  If nothing else, it’s a good warning to back up your data regularly, so if you did get infected with this virus you could simply reformat and restore from your most recent backup.  It is also a warning to always be wary of what you’re running on your computer.  Since it is a Trojan, it does not self-replicate.  That means the user had to actually launch the executable containing the virus.  It could have been bundled in some “pretty cool” shareware game, or simply an exe attached to an email with an enticing name.


Instituting Required Vacations

June 25, 2008

Okay, so today’s post isn’t really directly related to IT (it’s more HR policy), but I do find it a very interesting subject that definitely applies to small businesses.  I stumbled across this article this morning about the downside of people taking fewer vacations.  It struck me as interesting since we have recently been reviewing our vacation policy as part of preparing for our SAS 70 audit.  As the article points out, peak performance requires downtime: both our brains and our bodies need it, and if we don’t get it, our performance suffers.  So if you want to get the most out of your employees and ensure their mental health, requiring them to take at least 5 consecutive days of vacation once a year would be a great policy.  Looking into that, I stumbled across this article which shows the standard minimum required vacation by country.  Of course, the US is at the bottom with 0 weeks, even below Japan, the country that invented the concept of karoshi, being worked to death.

So we’ve determined that requiring a week of vacation is good for employees’ mental health and performance, but it also has many other benefits.  Researchers are linking less vacation time to increased workplace tension, anger, and conflict.  All of these things can cost companies large amounts of money in lost productivity, employee turnover, and potentially lawsuits.

However, probably the most important reason for a small business to institute a “required vacation” policy is to prevent fraud.  As you can see from this article, small businesses are much more susceptible to fraud than large companies because individuals usually have much more responsibility with fewer checks and balances than at a large company.  Obviously, a little more due diligence during the hiring process can help reduce this risk a lot, but required vacations can play a large role in uncovering any fraud that may be going undetected.  When someone is committing fraud, they usually have to doctor records or do something to cover their tracks.  Very often, this is a regular task they have to do daily, or at least every couple of days.  If they are required to go on vacation, they will not be there to cover their tracks, which greatly increases the likelihood that someone will notice something isn’t right and uncover the fraudulent activity.  Of course, the other alternative is to bring in a second person to doctor the records while they are on vacation, but this complicates matters greatly for the fraudulent employee.

Finally, and maybe just as important as the reason listed above, small companies should require employees to take at least 5 consecutive days of vacation a year to help ensure they are not too reliant on a single individual.  If your business can’t survive 5 days without one individual, what are you going to do if that individual has an accident or decides to look for a job somewhere else?  If it’s simply a problem of manpower, consider a temp agency to help fill the void.  However, the thing you need to look out for more is “tribal knowledge”.  In other words, if you can’t survive an individual being gone because he or she is the only one who knows how to do a given task, then you need to have someone else trained to do it, or document the process so anyone can.  Again, what are you going to do if this indispensable employee has a bad accident or decides to take his or her knowledge elsewhere?

So, as we see, not only are vacations vital for an individual’s mental well-being, they increase performance and provide countless other benefits to the organization.  I am curious whether anyone out there actually has a policy like this currently.  Hopefully the value of this is starting to be recognized and we will see these policies become more and more common.


Teleworking and the Future of the Workplace

June 24, 2008

I recently stumbled across this article about how allowing employees to telework could save billions!  It focused more on the savings for the employee, but I would argue there are also many savings to be had for the employer.  So what are some of the benefits of allowing teleworking?

Well, first we’ll look at it from the employee’s point of view.  With gas around $4/gallon now, many people are spending quite a bit of money just getting to and from work.  According to the article, the average American spends around $2,000 per year on gas commuting to work (not really sure what per-gallon price was used in this calculation though; it may be higher now).  So if we say the average American works 240 days/year (taking out about two weeks of vacation and two weeks’ worth of holidays/other time off), that means the average American is spending a little over $8/day to get to work.  If that’s true and the average American were allowed to telework just one day a week, that individual would save about $400/year.  I think that’s something most employees would appreciate.  Especially if you bump that to two days a week and they are saving $800/year.

Now, from the employer’s point of view.  I’ve seen several studies (didn’t have time to dig up links, sorry) suggesting that the average individual tends to do more work when working from home than when at the office.  I would have to say my experience supports this as well.  You may think there are a lot of distractions at an employee’s home, but you have to remember there are an awful lot of distractions at work as well.  Probably the biggest distraction is just all the other people there who can walk up and ask questions or hold general conversation.  You are putting a person in a social environment, and thus they are going to act socially, catching up on the latest gossip and how people are doing.  Even if the individual isn’t a very social person, he or she will likely participate to an extent, mainly because it is the polite thing to do.  When an employee is working from home, you won’t have those distractions, so they can typically spend longer blocks of time doing dedicated, heads-down work.

Now let’s get into the big savings.  If you get good at this and begin scheduling teleworking, you can eliminate the need for office space.  With alternating schedules, two people can share the same office.  You can even go so far as to have no dedicated offices; people simply use an available space when they are in the office.  One of the best bosses I ever had strongly believed the work force would head this way in the future anyway…and I think I have to agree with him.  Individuals would not be provided with offices, or even PCs and things like that.  There would be office space that could be used if needed, but no one would have dedicated offices, and people would work from home most of the time.

One concern I know most people may have is performance.  “How do I know they are actually working?”  The best answer I’ve seen for this (I can’t remember where it came from) is: if you have to ask this question, you don’t really know they are working now; you simply know they are present.  Just because someone is at the office doesn’t mean they are working.  So if you don’t currently have any way of knowing whether someone is actually spending their time working, other than knowing they are in the office, you have bigger things to work on than this issue.  This is not to say you need to micro-manage and dictate everything they do.  But you should know what they are doing beyond just, “yep, he’s here.”

Another similar concern people may have is, “now that this person is working from home, how do I know they aren’t working for someone else on the side?”  Very similar to the above: you need a way to know what someone is doing rather than just knowing they are there.  Now, if you have that, and the person is able to do everything you want them to do to an acceptable level of quality, and they still feel they have time to work for someone else…what is the problem with that?  As long as they aren’t doing anything that would cause a conflict of interest, and as long as you are getting the work you need out of them, does it really matter if someone else is as well?  The well-respected manager I mentioned above saw this coming too.  He predicted that the majority of very talented individuals would end up becoming more like contractors working for many different companies for exactly this reason.  They are working from home, getting more done, and looking for more to do.

So, I would encourage everyone to consider a teleworking policy and start reaping the benefits.  I know we will!


Backups

June 23, 2008

Alright, so back on the theme of changes we were required to make because of SAS 70…a fairly major change for us was that of backups. Don’t get me wrong, we were doing backups before, and we were doing a pretty good job of it if I do say so myself. However, as usual, it wasn’t quite formal enough for the auditor’s liking and was missing some desperately needed notifications.

Basically, prior to the audit, our backup plan was the following…

  1. SQL Server did a full backup every night at midnight and transaction log backups every 10 minutes.
  2. A process on another server checked on a regular basis (approximately every 10 minutes) for new backups and would download them from the SQL server, compress them, and encrypt them.
  3. This same process would also copy over the file data store associated with our application using rsync (files in the datastore are already compressed and encrypted).
  4. A process running at our main office (separate facility from our production network) would run every hour and rsync the directory on the 2nd server which contained a copy of all the file storage as well as the compressed and encrypted SQL backups.
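As a rough sketch, steps 2 through 4 amount to a couple of cron entries like these.  The hosts, paths, and script name here are all made up for illustration, not our actual setup:

```
# On the 2nd production server: every 10 minutes, pull new SQL backups
# and compress/encrypt them (step 2), and mirror the file store (step 3)
*/10 * * * *  /usr/local/bin/fetch-and-encrypt-sql-backups.sh
*/10 * * * *  rsync -a sqlserver:/datastore/ /stage/datastore/
# At the office facility: every hour, pull the staged copies offsite (step 4)
0 * * * *     rsync -a prod2:/stage/ /office/backups/
```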

That process worked great for us, but again, it lacked the logging the auditors wanted and also didn’t do any OS/system-level backups.  Honestly, we didn’t, and still don’t, see a need for OS/system-level backups, as we use a pretty generic configuration and can have a new install up and running on a new box as fast as we could ever restore from backup (not to mention we’ve heard of very FEW instances of any OS-level backup of Windows ever working very well).  But the auditors wanted to see it, so we decided to comply and check the box.

The solution we’ve found to make everyone happy is somewhat of a mixture.  We decided to use Bacula as our backup server since it’s open source and free and seemed to have all the features we needed.  It also works across platforms, which was a requirement.  The server installation and configuration were really fairly simple following the instructions.  We were also able to get the clients installed without an issue and OS-level backups working just great (it’s configured to do an ntbackup systemstate backup on Windows prior to copying any files, as suggested on the Bacula site).

For the application backups described above, we’re taking more of a phased and hybrid approach, mainly using Bacula as the scheduling/reporting mechanism.  For instance, the SQL backup that was previously kicked off by our application is now a scheduled Bacula job that runs every 10 minutes with a full at midnight.  In reality, this backup doesn’t copy any data to the backup server (it’s configured to back up a dummy directory that’s just empty).  All it does is run a script before its “backup”: a batch file on the SQL server that does the backup using the sqlcmd command line tool.  Based on the exit value of this script, Bacula can tell us whether the backup was successful.
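A job like that might look something like this in bacula-dir.conf.  This is only a sketch; the client, fileset, schedule, storage, pool names, and script path are all invented, and your resource definitions will differ:

```
Job {
  Name = "sql-backup-check"
  Type = Backup
  Client = sqlserver-fd
  FileSet = "DummyDir"          # an empty directory; no real data is copied
  Schedule = "Every10Minutes"
  Storage = usb-storage
  Pool = Daily
  Messages = Standard
  # The batch file runs sqlcmd and exits non-zero on failure, which
  # marks the Bacula job as failed.
  ClientRunBeforeJob = "C:/scripts/sql-backup.bat"
}
```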

The next piece implemented was replacing the cron job in number 4.  Right now, rsync is kicked off by cron every hour to copy these files to our other facility.  We will be replacing this with a Bacula job similar to the SQL backup.  With this job, Bacula itself will not copy any files over (it’s just an additive data store, so doing a full backup every week or even every month seemed like overkill since the files never change; there are just new files added).  It will just run the rsync script and report success or failure based on the exit code of the script.

Step 3 will eventually be eliminated.  Instead of the data being rsynced to the 2nd server in the production network and then rsynced again to the other facility, we will rsync straight from the datastore to the other facility, basically consumed in the step described above.  The datastore is mounted as a shared drive on the 2nd server, so we’ll just rsync the shared mount instead of the copy of the shared mount.

Step 2 will be the last to be replaced (mainly because we haven’t been able to test Bacula’s encryption much yet).  This will be changed to an actual Bacula job that copies data over.  It will be similar to how it exists now: when the Bacula job kicks off, a script will run which rsyncs the SQL Server backup directory to the 2nd server in the production network.  Bacula will then back up the rsynced folder, compressing and encrypting in the process.  Once we have adequately tested Bacula’s encryption and restoring encrypted files, we’ll be good to go.

As for the backup media, all backups are stored at our office facility (where the backup server is located); the data is simply transferred over our private connection to the production facility.  Instead of tapes, we opted to use 500GB USB hard drives which we rotate on a given schedule.

Basically, we have a daily pool of drives which get rotated daily and just contain incremental backups and other backups that only need to be stored for a week.  There is also a weekly pool which gets rotated once a week and contains full application backups and differential OS/system backups, which get stored for a month.  There is also a monthly pool which mainly holds the full OS/system backups, which get stored for two months.  Finally, we have one other pool of hard drives for the rsync of the filestore.  Since this is an rsync, we didn’t want any one drive to get too far out of sync, but we always wanted to be sure to have a good copy, so we decided to use 3 drives rotated on a daily basis.  This way, there are always 2 drives in the safe and one active, and no drive gets more than a couple days out of sync.

Each USB hard drive is one volume, named to correspond to the asset tag on the drive.  This way, when Bacula requests a given volume for a restore, we know exactly which drive we need to plug in.  As far as getting the drives to mount to the correct places (daily drives need to mount to /storage/daily, weekly to /storage/weekly, etc.), I did the following:

  1. Created a configuration file which lists every drive’s “name”, the device id generated by the system (and subsequently linked to the device itself in the /dev/disk/by-id directory structure in Fedora 8), and the mount point for that device (so if it’s a daily, /storage/daily).
  2. I then created a perl script which reads this file, checks whether each device is plugged in by looking for its device id in the /dev/disk/by-id directory, and if so, mounts the device to the mount point listed in the configuration file (and then chowns the mount point to give the bacula user write access, since it mounts as root).  The unmount script simply unmounts any drives listed in the configuration file that are plugged in.
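The perl itself isn’t shown here, but the same logic fits in a few lines of shell.  This is just a sketch of the idea, not the actual script; the config format (one line per drive: name, by-id path, mount point), the device id, and the paths are all invented for illustration:

```shell
#!/bin/sh
# Sketch of the mount script's logic (the real one is perl; the config
# format, device ids, and paths here are invented for illustration).
CONF="${CONF:-$(mktemp)}"   # the real script would use a fixed config path

mount_listed_drives() {
  while read -r name devid mountpoint; do
    case "$name" in ''|\#*) continue ;; esac    # skip blanks and comments
    if [ -e "$devid" ]; then                    # drive is plugged in
      mount "$devid" "$mountpoint" &&
        chown bacula "$mountpoint"              # mounted as root; let bacula write
      echo "mounted $name at $mountpoint"
    else
      echo "skipping $name (not plugged in)"
    fi
  done < "$CONF"
}

mount_listed_drives
```

The unmount counterpart would walk the same config file and unmount anything currently mounted.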

This script structure is very helpful when rotating the drives: now all I have to do is run the unmount script, rotate the drives, and then run the mount script, and all the drives are mounted to the correct directories so they will get used properly by Bacula.

Hopefully, this will give you some ideas on how you could use Bacula in your organization.  If you’re interested in more details on how I have the USB devices working, or in my general configuration, I’d be happy to share.  We’re currently still tweaking the schedule/priority of the jobs, but all in all, it’s working very well.  Has anyone else had any luck with Bacula, or any other backup software you’d like to share?


Web Filtering/Proxies

June 20, 2008

I recently stumbled across this blog posting talking about personal web surfing at work. Basically, the blog is commenting on a survey that stated 39% of 18-24 year olds would consider leaving their job if personal web browsing were banned. For the 25-65 demographic (which is an insanely large demographic which begins to bring the validity of the survey into question), the percentage dropped to 16%.

This posting struck me as interesting since the policy of personal web surfing has come up a time or two here as we are going through and creating all these new policies for our SAS 70 audit. I think I would have to side with the 39% in the 18-24 crowd, but then again, I’m almost always against what I refer to as “blanket” policies.

  • No Personal Web Browsing
  • No Personal Phone Calls
  • All Training Cancelled Due to Budget Constraints (this one seemed to happen every year just before a training class I had scheduled at a former employer…I learned to always get training scheduled for the first half of the year)

So, if we are going to allow personal web browsing, we should at least state expectations in the policy somewhere. Even if it is as general as “personal web browsing is allowed as long as it does not interfere with your performance”. Of course, we probably want something about acceptable content as well. Maybe limiting certain types of things like streaming video and music if you have bandwidth issues.

My personal preference is to have a very flexible policy. However, I also think a company should use a web proxy that requires a login, so that a user has to log in prior to accessing the external internet. Not only does this let you log usage and gives you some data to back you up if you feel personal browsing is affecting someone’s performance, but the fact that the user has to log in before getting outside gives them a subtle little reminder that they are being monitored. I know, maybe a little “Big Brother” for some, but if you’re not doing anything wrong there shouldn’t be any issues. Some more advanced proxies can automatically filter based on content type, so you can ban access to obviously inappropriate material. Another interesting concept I’ve seen was a small company that just had a simple web proxy and published the logs for all employees to see. So if John Smith in accounting was going to some adult website during work, his co-workers could find out. Nothing seems to be quite as big a motivator as public humiliation 🙂
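For what it’s worth, an open-source proxy like Squid can enforce the login requirement with just a few lines of configuration.  A sketch (the auth helper path, password file location, and realm text vary by install):

```
# squid.conf fragment: require a login before any outside access
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic realm Company Web Proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
# access.log will now record the username with each request
```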

So, for all of you out there considering blocking web browsing, maybe you want to reconsider if you have a large number of employees in the 18-24 demographic (or probably the 25-30/5 as well). I’d be interested in hearing what some people’s policies are on personal web browsing and how they are enforced if at all.