ben.eficium is the blog of Ben Slack, citizen and principal consultant at Systems Xpert.
All posts are Copyright © Ben Slack on the date of publishing.


12 May 2010

Why I am against the Internet Filter

No, I'm not a great lover of pornography; in fact, I believe our governments should take active steps to reduce its pervasiveness in our society. It's simply that the Internet Filter is not an effective way to do it.
The arguments against the Filter that I will discuss here are purely pragmatic. In summary, the Filter will inhibit internet performance and user experience, and it does not address the primary reasons given for its implementation. Additionally, the Filter will allow for the suppression of free speech; this was already demonstrated in the trials conducted with a handful of ISPs, as this week's 4 Corners program on the Filter showed.

The Filter will inhibit Internet performance

The Filter does and must work at the level of domain names and URLs, the character-based addresses you type into a browser address bar and click in hyperlinks. Domain names are mapped to numerical IP addresses, and because those mappings can be changed quickly and easily, the only effective way to filter illicit material on the internet is at the domain name level, not at the numerical address level.
Unfortunately, the transmission of messages over the internet relies purely on the numerical addresses. Packets of internet traffic must be aggregated and decoded in order to ascertain the URL being requested. This produces a processing overhead that simply must slow things down when implemented on a massive scale at every ISP. The slowdown may not be noticeable to users viewing Web sites in browsers, but it will impact the rapidly increasing use of World Wide Web protocols to transfer data between businesses (Web Services).
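To make the two levels of addressing concrete, here is a rough Python sketch (illustrative only: example.com stands in for any site, the DNS lookup needs a live connection, and the request bytes are hand-rolled):

    import socket

    # Domain names map to numerical IP addresses, and the mapping can change
    # at any time, which is what makes IP-based blocklists easy to evade.
    for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80,
                                                        proto=socket.IPPROTO_TCP):
        print(sockaddr[0])  # the address(es) currently behind the name

    # The URL itself travels inside the application-layer payload. A filter
    # must reassemble packets and parse the HTTP request before it can see
    # which host and path were asked for.
    raw_request = b"GET /page.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
    request_line, host_header = raw_request.split(b"\r\n")[:2]
    print(request_line, host_header)

Multiply that reassemble-and-parse step across every request at every ISP, and the overhead cannot be engineered away.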

The Filter does not deliver its promise

The primary reasons given by the Government for the Internet Filter are to protect children from pornography and to stop the distribution of illicit pornography. It cannot deliver on either of these promises.
There is a proliferation of proxy and tunnelling services available on the internet to bypass school and corporate firewalls, as well as national firewalls such as China's. Simply googling “proxy bypass” returns a multitude of options. These services will work equally well against the Internet Filter. If children want to access pornography on the internet, ISP-level filtering will not stop them. Only software installed on the user's computer can effectively perform this function.
As was noted in the Q&A discussion accompanying the 4 Corners program, only a tiny fraction of the distribution of illicit pornography on the Internet occurs via the World Wide Web, the only part of the Internet that would be filtered. Illicit pornography is distributed over peer-to-peer (P2P) networks and virtual private networks (VPNs). The only way to investigate and stop the use of these networks to distribute illicit material is to infiltrate them and gather evidence against the people who use them for this purpose. Police forces around the world already have large sections devoted to this work and are doing the best job they can. While the number of arrests and prosecutions is currently small, it will grow as police gain the experience and technical skills they need to better infiltrate these groups.

The Filter will suppress free speech

An unfortunate consequence of government censorship is that the bureaucracies established to enforce it will always be slow to react to new threats and slow to react when something goes wrong, such as the accidental blocking of a legitimate Web site. Rules and policies established by regulatory authorities become, over time, divorced from the values they are supposed to enforce. How often have you heard a bureaucrat state that the “rules are the rules” and that there is nothing they can do to change them? If you've ever had dealings with the ATO, ASIC or other government bodies, you will have heard words to this effect, even when there is no logical reason to enforce the rules in a particular instance, or when it is arguable whether the circumstances fit the rule being applied. Bureaucrats will usually respond to the legitimate fear that the use of discretion will endanger their employment by steering towards a restrictive and conservative application of regulations. This is why we have established bodies such as the Administrative Appeals Tribunal and the various Ombudsmen: to independently question bureaucratic decisions.
When it comes to questions of censorship, bureaucratic inertia is even more problematic, as whether a particular piece of content meets the rules is often subjective and arguable. This was highlighted in the 4 Corners program with the example of the anti-abortion Web sites that were blocked due to images of foetuses. There is no logical explanation for this result, except that a bureaucrat was conservatively applying rules in a subjective manner. This example, and the general nature of bureaucratic operation, means that freedom of political and general speech is endangered by the Filter, to the degree that its benefits, which are very few indeed, are outweighed by the need for freedom of speech in a democratic society.

10 May 2010

Be prepared, Web 3.0 is coming

Web 3.0 is a new buzzword that describes Web content that enables greater degrees of self-description, indexing and interactivity than is currently possible. The idea has been around since the late 1990s in the form of Sir Tim Berners-Lee's Semantic Web, and internet enthusiasts have long known the benefits of parts of the Semantic Web, such as Dublin Core and RDF meta-tagging. But when a non-geek friend of mine, @Control-Edit, tweeted about Web 3.0 the other day, I knew its time was nearing.
In brief, the Semantic Web is a loose group of standards, data formats and functionality designed to make content on the World Wide Web more meaningful and understandable to automated systems owned by search providers, news agencies, content aggregators and content specialists. More information can be found on the W3C Web site and on the ever-reliable Wikipedia. Additionally, the video my friend tweeted is highly recommended.
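If machine-readable meaning sounds abstract, a toy Python sketch may help. The Semantic Web's basic unit is the statement, or “triple”; the URIs and values below are invented for illustration:

    # Each statement is a (subject, predicate, object) triple. In real RDF
    # the subject and predicate are URIs; "dc:" abbreviates a Dublin Core URI.
    triples = [
        ("http://example.com/post/42", "dc:title",   "Be prepared, Web 3.0 is coming"),
        ("http://example.com/post/42", "dc:creator", "Ben Slack"),
        ("http://example.com/post/42", "dc:date",    "2010-05-10"),
    ]

    # An automated agent can now answer questions a plain keyword index
    # cannot, e.g. "which pages did this author create?"
    pages = [s for s, p, o in triples if p == "dc:creator" and o == "Ben Slack"]
    print(pages)
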
In part, the slow take-up of Semantic Web standards has been due to the power of search engines, such as Google, at extracting meaning from plain-language text. However, most serious commentators (e.g. Florian Cramer) seem to think that as the Web grows, plain-language indexing and Google's PageRank algorithm will become increasingly unable to provide relevant results. Therefore, some form of content semantics will be necessary to index Web pages in the near future.
So, what does this mean for business and Web site owners? Primarily, it means that content published to the Web or to online systems needs to carry meta-data in the formats the Semantic Web already dictates, plus any new formats that arise. That means more work for publishers, who will have to enter keywords, summaries and indexing information according to whatever information schemas become standard for particular kinds of content and industries. Automatic keyword extraction can help with this, but staying relevant will still mean more work for content publishers.
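To give a feel for what the simplest of these tools do, here is a naive frequency-based keyword extractor in Python (a sketch only; real extraction tools are far more sophisticated, and the stop-word list is deliberately tiny):

    from collections import Counter
    import re

    STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "that"}

    def extract_keywords(text, n=5):
        """Return the n most frequent non-stop-words as candidate keywords."""
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return [word for word, _ in counts.most_common(n)]

    print(extract_keywords("The Semantic Web makes Web content meaningful "
                           "to the automated systems that index Web content."))
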
A good first step for any organization is to ensure that all Web-published content is moved to the XHTML format. This will enable you to include XML-formatted meta-data with your content that will be ignored by browsers but picked up by Web 3.0 applications. The next step is to include meta-data in Resource Description Framework (RDF) format. This is the primary means of adding Semantic Web friendly meta-data at the moment, and it will become more common and necessary in the near future. At the same time, including Dublin Core meta-tags in your content is also worthwhile. Performing these three steps will make you a pioneer in Web 3.0 and will set you up with the business knowledge you need to understand and implement future movements in the Web 3.0 world.
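To illustrate the Dublin Core step, this Python snippet prints the kind of meta-data block that sits in an XHTML head, following the DCMI convention of declaring the schema with a link element. The titles and dates are invented; see the DCMI documentation for the full element set:

    # Dublin Core meta-data expressed as XHTML <meta> elements.
    dublin_core = {
        "DC.title":       "Be prepared, Web 3.0 is coming",
        "DC.creator":     "Ben Slack",
        "DC.date":        "2010-05-10",
        "DC.description": "Why the Semantic Web matters to site owners.",
    }

    head = ['<link rel="schema.DC" href="http://purl.org/dc/elements/1.1/" />']
    head += ['<meta name="{0}" content="{1}" />'.format(name, value)
             for name, value in dublin_core.items()]
    print("\n".join(head))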

16 March 2010

Web 2.0 for the enterprise

This is an article I started some time ago, but many of its points are still relevant and I will follow it up with an article on Web 3.0 – the buzzword of the moment.

What is Web 2.0?

Web 2.0 is a “buzz” term bandied about by internet columnists and consultants to describe the proliferation of Web-based applications that allow individuals en masse to author and collaborate on Web content. To facilitate mass appeal, these applications require no knowledge of the underlying Web technologies. Well known examples include Wikipedia, Flickr, YouTube, LinkedIn and the various Web-log (“blogging”) applications, such as Blogger, LiveJournal and MySpace.
People want, and some organizations have already implemented, some of the functions these sites offer on the corporate intranet, and possibly for customer and supplier use on your internet site. So it's important that IT departments understand the key Web 2.0 concepts, what changes might occur to current intranet and knowledge management practice, and how Web 2.0 will lead to an explosion of intra-corporate content that will need an upgraded infrastructure to deliver its benefits. In addition, you may have to look into enterprise search functionality to integrate different applications and facilitate easy navigation of content.
From a technical point of view, Web 2.0 applications are really no different to any other Web application used for content management (CM). These have been around since the very beginning of the Web, which is why you’ll probably get an incredulous look if you use the term “Web 2.0” around technical IT types. However, the size and momentum of blogging and the content communities, such as Wikipedia and YouTube, is a phenomenon of internet use that justifies a name, and like it or not, “Web 2.0” is that name.

Web 2.0 concepts

Blogs

By the law of averages, there are almost certainly a number of people in your organisation who maintain personal blogs. For the uninitiated, a blog is an online diary. A blog home page typically displays the most recent entries, several at a time, in reverse chronological order, with a link to an archive of all entries. Often a blog lets readers leave comments on each entry; on popular blogs the reader comments will be far more voluminous than the diary entries themselves.
Blogs have proven a popular and valuable communication tool on the internet for individuals to publish opinions and exchange ideas. The applications for the enterprise are many. A project manager with a geographically distributed team may decide to communicate project progress in a blog format, using the comments feature for discussion of risks and issues. A CEO may use the blog format on your internet site to highlight new product releases and communicate with customers; Jonathan Schwartz, CEO of Sun Microsystems, is a good example.
You probably already have a Web content management system (CMS) for internet and/or intranet publishing, in which case you’re halfway there. But it can be difficult to explain to content publishers that adding a blog entry can often be done with an existing CMS and you don’t have to procure a whole new blogging application and infrastructure. Non-technical people tend to see Web pages, blogs and other Web applications as completely different beasts. Sometimes they are, but in this instance, it is not necessarily the case. Of course, in some cases, it may actually prove more effective to install a separate blogging application.

Content syndication

‘Atom’ and ‘Really Simple Syndication’ (RSS) are formats that allow a Web site to push or feed content to a person, provided he or she has a reader application installed on their computer. For text content, these are mostly used by news sites, but it is increasingly common for corporate Web sites to provide feeds for customer communications. Future uses include feeds as a cheaper alternative to SMS when communicating with a mobile workforce.
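For the technically curious, consuming a feed is straightforward. This Python sketch parses a minimal RSS 2.0 document using only the standard library; the feed is inlined here, whereas a real reader would fetch it over HTTP and poll for updates:

    import xml.etree.ElementTree as ET

    feed = """<?xml version="1.0"?>
    <rss version="2.0"><channel>
      <title>Corporate News</title>
      <item><title>Product launch</title><link>http://example.com/1</link></item>
      <item><title>Quarterly results</title><link>http://example.com/2</link></item>
    </channel></rss>"""

    root = ET.fromstring(feed)
    for item in root.iter("item"):
        print(item.findtext("title"), "->", item.findtext("link"))
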
The most bandwidth intensive use of syndication on the internet is the relatively recent podcasting (audio) and vodcasting (video) phenomenon. In late 2005 the Sydney Morning Herald reported that Australia’s ABC was experiencing over 300,000 downloads a month from their radio podcasting feeds. This has grown substantially in the last 5 years. Obviously, corporate Web sites will never serve as much audio and video content as a broadcaster. However, demand to podcast and vodcast presentations, product launches and all manner of other content will come to nearly every enterprise in time, if not already.

Social networks

The original networking sites were the dating sites, which were among the first Web applications. More recent innovations are the business networking sites, like LinkedIn, where a person maintains a home page and a list of connections to other known users of the network. You can recommend those in your network or search through degrees of separation for people with the skills you need. YouTube and MySpace also have social networking functionality, and Facebook and Twitter have become ubiquitous in a short period.
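Under the hood, the “degrees of separation” search is just a breadth-first walk over the connection graph. A toy Python sketch, with invented names:

    from collections import deque

    connections = {
        "alice": ["bob", "carol"],
        "bob":   ["alice", "dave"],
        "carol": ["alice"],
        "dave":  ["bob", "erin"],
        "erin":  ["dave"],
    }

    def degrees_of_separation(graph, start, target):
        """Return the number of hops between two people, or None."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            person, distance = queue.popleft()
            if person == target:
                return distance
            for contact in graph[person]:
                if contact not in seen:
                    seen.add(contact)
                    queue.append((contact, distance + 1))
        return None  # not connected

    print(degrees_of_separation(connections, "alice", "erin"))  # 3
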
This kind of application can be very useful for large global businesses. Business services firms that need to quickly assemble people with diverse experience for projects anywhere on the globe would clearly benefit from an enterprise application of social networking. Many of them, such as Deloitte, have already implemented social networking internally.

Content communities

These sites are devoted to a particular creative art or area of knowledge and include YouTube for short-film producers and copyright pirates, Flickr for photographers, Writing.com for creative writers and Wikipedia for just about everything. The “wiki” genre of application allows collaboration on individual items of content.
Microsoft SharePoint has been widely implemented to provide some of the functions of content communities. However, its focus on Office documents does not really allow for the kind of dynamic collaboration that wikis offer.
Wikis are a great knowledge management tool for quickly and easily documenting business practices and the personal knowledge of your staff. If you aren’t already using wikis internally, then you should consider it, as they have great benefits for creating organizational knowledge and documenting practices for succession planning and business continuity.

Conclusion

Web 2.0 technologies can add a great deal of value to the enterprise if implemented with a strategic plan for well-understood and defined purposes. When selecting which technologies to implement, always consider the business needs the technology will address; implementing something just because a lot of people are clamouring for it is not a good idea. Depending on your network capability and topology, using existing internet-based services may be a cost-effective and efficient way to deliver the benefits to your staff. On the other hand, internally hosted applications may better suit your security and internet-use policies.

08 February 2010

Too Much Security Spoils the Enterprise

There, I've gone and said it (though I'm not the first). Beware: if you're a security professional or consultant, you may find this article blasphemous. If you're a manager who wants to cut costs and improve services, then you'll want to read on.
I believe that organizational security is to the post-9/11 world what ISO 9000 was to the 90s. Applying the Pareto principle, in the 20% of cases where it is implemented correctly, it will streamline operations, reduce costs and provide better physical & data security. In 80% of organizations, new security practices will be implemented for bad reasons and will make the organization more bloated, bureaucratic and less secure as a result.
If you have heard anything like the following statements with regard to a security initiative in your organization, then it is quite likely that you are going to be in the 80% that is doing this for the wrong reasons and will end up with the wrong results.
  • “Everyone else is doing it, so we should do it.”
  • “Our most important customer requires us to do it. Just get a consultant to come up with something, will you?”
  • “We've got to implement all this new security for privacy legislation, let's just do it for everything.”
  • “Our COO/CIO got sold this security methodology by a consultancy. Now she's hired them to just whack it in without consulting anybody else.”
  • “We don't have time to do requirements for our security initiative, just get in the best name in the business.”
  • “Our website/system/database got hacked. Fix these holes and do it quick.”
  • “We're small but experiencing rapid growth – we'll need a robust security methodology like ISO 27002 to see us through.”
  • “We need to protect every piece of information in the business – I don't care about the cost.”
We can distil these statements into five core bad reasons to implement a security initiative.

1 - Follow the leader

Always a bad reason to implement anything, yet this is one of the most common reasons that any IT project gets up in an organization. Often driven by fear of what the competition's doing and hype from vendors and commentators, the herd mentality is rife in enterprise IT.
The answer is to always assess your organization's requirements in light of current and future business practices. What benefits does the service or product deliver? Will it save time or money? Will it fit right into your business, or will you have to implement a program of ancillary services in order to get it to work? If you can't get concrete answers to these questions and the best reason you can come up with to implement a product or service is that “everyone else is doing it”, then you shouldn't do it.

2 - External requirement to implement

When a customer or a key stakeholder requires us to implement new policies and procedures, we usually resent them, often because we don't understand the need for them. While externally generated requirements can be a good push to action, a golden rule is “never implement anything unless the need is real, well understood and will benefit the organization”.

3 - Getting sold

This is not always a bad thing. Consultants, IT vendors and standards organizations are the generators of new ideas for managing IT and often their products and services are capable of delivering the promises they make. However, buying the first service that comes along without investigating your organization's unique requirements and comparing other services and products in the market will almost always end in implementation of a “white elephant” system.

4 - Reactive response

The nature of managing IT systems means that your department is often under-staffed, overworked and just managing to cope with day-to-day business. There is usually little time or availability for big projects with no immediate return. Therefore, unfortunately, most security is implemented in a haphazard, reactive manner with no over-arching strategy or co-ordination. In the long-term this results in fragmented, often duplicated security measures that are not well documented or understood by your staff.
The answer is to make the time and people available to implement security at the appropriate level for your organization's IT structure. Having a chief security officer (CSO) with a documented job specification and KPIs should be the first step. Ensuring that they have the authority to enforce their policies is the next. This doesn't mean I'm advocating a purely centralized IT structure. It just means that you need co-ordinated planning and documentation of security practices.

5 - Over-emphasised risk

If your systems were to be hacked and, for example, your HR policies were changed or stolen, does it really matter? Really? While it may be annoying, the cost of fixing the policy pages on your Intranet would pale in comparison to implementing best-practice security to ensure the likelihood of this happening was incredibly small. Is the extra cost worth it? Judging by the number of organizations that have secured their entire private network with the most robust security, the answer would seem to be yes. However, I question whether this approach is really providing value-for-money for your organization.
To me, a better solution would be to perform a comprehensive review of your data and classify it into several categories of required secrecy, and therefore required security. You should then aggregate the content by classification into different areas of your network, each with its own level of security. Yes, this is harder than telling someone to “just secure everything”, but this kind of analysis and planning can save buckets of money that could be better put to use providing better functionality for your internal users and external stakeholders.
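As a sketch of the idea, classification-driven security might look like the following in Python. The tiers, zone names and controls are invented for illustration, not a recommendation:

    # Controls (and their costs) scale with the classification,
    # rather than applying vault-grade security to everything.
    CLASSIFICATIONS = {
        "public":       {"zone": "dmz",      "encrypt": False, "audit": False},
        "internal":     {"zone": "intranet", "encrypt": False, "audit": True},
        "confidential": {"zone": "vault",    "encrypt": True,  "audit": True},
    }

    documents = [
        ("Press release",        "public"),
        ("HR leave policy",      "internal"),
        ("Customer credit data", "confidential"),
    ]

    for name, level in documents:
        controls = CLASSIFICATIONS[level]
        print("{0}: zone={1}, encrypt at rest={2}, audit access={3}".format(
            name, controls["zone"], controls["encrypt"], controls["audit"]))
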
Additionally, you need to ensure that the security measures you implement are the right fit for your business. Meeting ASIO T4 requirements or gaining ISO 27001 certification may look good on your product brochures, but unless you are a defence contractor or dealing with high-level government secrets, is the huge cost actually worth it? In most cases, even in financial institutions, the answer will be no. A better way is to look at the ISO standard and implement the pieces that fit your business. Unless your customers require formal certification, the external auditing process is usually a waste of money.