Hello, my name is James. I'm an IT Manager, specialising in Windows Server, Software Development (.Net) and SQL Database Design & Administration.

Recent Portfolio Items

The road to Agile

Over the last five years I've been part of an IT team that's gone through a period of expansion in line with the explosive growth of the business. Originally part of a team of two, between us we ran IT support and delivered various software development tasks to improve the capabilities of the business. There was no concept of prioritisation, forward planning or even robust requirements gathering – we didn't need it; we just got on with the task, and it worked. It was a symptom of a then small business needing to adapt and react quickly to improve.

Since 2011, incredible growth of the business led to two relocations, and an IT team of 2 becoming a team of 13, with the overall headcount growing from 30 to over 250 over the same period. As the company changed, so did the management structure; two business principals became a board of directors, with a senior management team below them. The number of stakeholders, each with their own unique set of demands on IT, increased dramatically.

My role changed from support, to developer, to team lead, to manager, and ultimately I've been charged with delivering change to bring our department in line with the expectations of a business of this size: better visibility, better prioritisation and better management of expectations. The real challenge is going to be delivering this whilst still supporting the business in maintaining its biggest competitive edge – continual improvement and quick reaction to changing needs.

The challenge has become putting forward the right blend of requirements gathering, rigorous specifications and test plans, whilst still being able to get tasks off the ground quickly and delivered in a timely manner.

Researching the various software development methodologies led me to Agile. Adopted successfully by thousands of teams across the world, it was clearly something we should look at. Full Agile / Scrum would require fundamental changes to the way we work and the way we interact with the business, and I don't think the business would respond well to 'shock and awe'. Getting a better understanding of User Stories and Acceptance Criteria showed me that this is where we'd get the greatest impact, whilst maintaining our strong working relationship with the stakeholders. User Stories, whilst simple in nature, are an incredibly easy way to convey the intent and overall goal of any task.

I once read that the key to Agile is making it work for you – there are no rights or wrongs, and no hard and fast rules. Agile is a collective ideal, and so I felt comfortable in not putting forward adoption of the full workflow.

With Management briefed, and a new work request system currently under construction, only time will tell if this yields the results I’ve been tasked to achieve…


SQL Server Migration and Upgrade – Success

Over the past few months the IT team have been preparing for one of our most ambitious upgrades of our infrastructure – not because of what we were trying to achieve, but because of the work needed to get us there.

Our newly retired database server happened to also be a file server, DNS, DHCP and WINS server, as well as a Domain Controller. For a small business it's reasonably normal to have a server fulfilling multiple roles, but it made for a bleak outlook for the team: because of the multi-role nature of the server, we'd have to rebuild and redeploy all of our in-house software to point to a new database server. That was near enough 100 projects, services and websites needing attention.

Back in late 2011 when the server was first commissioned, we estimated it would be good for 70–100 users, allowing comfortably for 100% growth of the business from the 38 users at the time. From a software development point of view, the prospect of having to migrate to a new server was never discussed – we were content for the connection strings within our applications to name the server directly.

Hindsight. Well… you know the saying.

Fast forward to the present: 202 active users were hitting our database, file services and authentication on a daily basis, and I was spending a long time babysitting the server to keep it running reasonably smoothly – watching closely for long-running processes and coming up with clever, slicker ways of running queries to squeeze out that last bit of performance. Deadlocks, blocked processes and slower responses were becoming the norm, and action was urgently needed.

Our new server was racked and ready to be used some months ago. It runs SQL Server 2014, has 50% more cores and 50% more RAM, and we hope it will see us through another three years.

Our first job, learning from the mistakes of the past, was to create a DNS alias for the server, for use in our database connections. With the alias in place, if we ever have to go through this exercise again, we'll be ready.
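As a rough illustration (the server, zone and alias names here are made up, not our production names), the alias is just a CNAME record, added either through the DNS console or from the command line on a domain DNS server:

dnscmd dns01 /RecordAdd corp.local sqlprod CNAME dbserver01.corp.local

Application connection strings then reference sqlprod.corp.local rather than naming the database server directly, so moving to new hardware only means repointing the alias.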

With all of our software prepped and tested against the new alias (which, at that point, still pointed at our current production database server), we were ready.

The documented approach to a database server migration is to detach the databases from production and reattach them on the new server. For the purposes of our exercise, we didn't exactly follow convention – that process would have been a one-way street, as our databases were running on SQL Server 2008 R2 at SQL Server 2000 compatibility level and would have been upgraded as part of the deployment.

We opted instead to run a full set of backups, shut down the SQL services and deploy those backups to the new server – with the help of a few scripts I'd created for deploying development database instances, this was painless. The other handy script in my tool belt was 'sp_help_revlogin', which we used to script out the logins and security settings.
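For anyone curious, the core of that process is nothing more exotic than a backup and a restore per database, plus scripting out the logins. A rough, hypothetical sketch (server names, database names, logical file names and paths are placeholders, not our real ones):

rem Back up each database on the old server
sqlcmd -S OLDSQL -E -Q "BACKUP DATABASE [ExampleDB] TO DISK = N'\\backupshare\ExampleDB.bak' WITH INIT"

rem Restore on the new server, relocating the data and log files to the new drives
sqlcmd -S NEWSQL -E -Q "RESTORE DATABASE [ExampleDB] FROM DISK = N'\\backupshare\ExampleDB.bak' WITH MOVE 'ExampleDB' TO N'D:\SQLData\ExampleDB.mdf', MOVE 'ExampleDB_log' TO N'E:\SQLLogs\ExampleDB_log.ldf'"

rem Script the logins (with their SIDs and password hashes) on the old server, then review and run the output on the new one
sqlcmd -S OLDSQL -E -Q "EXEC sp_help_revlogin" -o logins.sql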

We switched the DNS alias to the new server, flushed the DNS caches and fired up our first application – with our fingers tightly crossed, it started and could connect!
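The cut-over itself (again with hypothetical names) was simply a case of repointing the CNAME and clearing cached lookups on the application servers:

dnscmd dns01 /RecordDelete corp.local sqlprod CNAME /f
dnscmd dns01 /RecordAdd corp.local sqlprod CNAME newdbserver01.corp.local
ipconfig /flushdns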

Are you asking the right open questions?

I’ve had the pleasure of spending the last couple of days with Russ Baleson who has delivered training to our management team around more effective communication.

The content was ground-breaking for me for a number of reasons. It changed my perspective on some things I always thought to be right, and it's prompting me to reconsider just how effective my communication style has been.

Open and closed questions were always an area I thought I knew well: open questions lead to elaborate answers, closed questions generally get a one-word response. Simple, right?

What do you do when your Open question gets a closed response?

That's always puzzled me. Leading an IT team, I sometimes find it challenging to get my team members to open up. I used to write that off to the nature of software developers – there are plenty of preconceptions that could be used to explain the traits and behaviours – but it's dawned on me that it's self-inflicted. I was wrong in my approach.

Take a typical question you might ask a Software Developer:

Q: What went well with this project?
A: We delivered it on time.

To my software developer – someone with a personality profile that is analytical, logical and to the point – this is a perfectly proper answer, but it leaves me craving just that bit more. How did you deliver it on time? Did you pull together? Did you feel well supported? Just talk to me!

Let’s try this again.

Q: Tell me some of the things you think went well with this project?
A: Well, we hit the deadline, which I suppose is the most important thing. We also worked very well together as a team, and it was really good to get the support of the department manager – that certainly paid off, as we knew exactly how we'd approach it.

Suddenly, we’ve hit gold – here’s a real insight into the project and the mindset. A simple change with dramatic results.

What’s the secret?

The key here is in the delivery. The wording matters because you need to make your intentions clear too – I want you to talk, not just a bit, but lots. Simply wording your question differently can make all the difference.

What did you do at the weekend?
Tell me some of the things you did at the weekend?

What can we do better?
Talk to me about some of the ways we can do better?

Russ has published a book on Communication Skills which I’ll certainly be purchasing very soon.

 

Looking back

In 2011, having just taken the plunge to join a long-standing friend in his endeavour to set up a creative agency, I'd begun paving the way for a career change that's put me where I am today.

Embracing social media helped this fledgling agency pick up new clients, and I worked closely with a number of business owners and stakeholders to develop their social media presence and effectively leverage it to engage with their prospective customers, supporters and audiences.

Back then, social media was a relatively new buzzword for business, and doing it right was something that didn't come naturally. Trying to get the point across that simply broadcasting your message was never going to be effective had me banging my head against the wall frequently – it seems this lesson is still as relevant now as it was then.

Personal circumstances led to the agency ultimately closing and I faced redundancy, but a new opportunity through a professional connection led me to a multi-disciplinary role that developed over the next four years into leading the day-to-day operations of an IT team, supporting 175 users and the infrastructure of the market-leading financial services business it has now become.

Today, with my role evolving towards management of an IT department, I'm not able to dedicate the same time and effort to social media as I used to. I'll still find opportunities to talk about new and upcoming technology, and to share information, tips and tricks I think might help, so hopefully my site will find some new life.

As part of my reflection on how I got here, I decided to resurrect some of my ‘tools of the trade’, mainly to see if anyone visited my blog anymore, and was quietly surprised to see some traffic coming through.

'Reputation Management' – also known as googling yourself – is an important task for anyone trying to establish themselves as a credible source of information, so out of curiosity I wanted to see just what Google held on me. I was delighted to find that some of my work had been picked up by Business Insider in their article Humanizing Your Social Media Efforts. The site is ranked in the top 200 sites in the world in terms of traffic volume, so that's huge exposure for me!

This might help get a bit more traffic to the site today, and explain why you are reading this, so I'll be thinking of some relevant, useful and quality content to put here soon.

 

Starwind Virtual SAN and HP StoreVirtual VSA – side by side

An upcoming project I'll be involved in centres around high availability and disaster recovery, and whilst Failover Clustering ticks a number of boxes on the high availability front, it does come with some additional caveats. I wrote recently about areas of weakness I'd found in a traditional Windows Server Failover Cluster, the main one being that shared storage introduces a new single point of failure.

To counter this, there are a number of possible options at a variety of price points, from dedicated hardware SAN devices (in a cluster configuration themselves) to software-based Virtual SAN solutions which claim to achieve the same.

This is a brief update on my experiences of Virtual SAN, based on two products, HP StoreVirtual VSA and Starwind Virtual SAN. I should note these are not performance reviews, just some notes on my experiences setting them up and using them in a lab environment.

Starwind Virtual SAN


In its basic form this is a free piece of software; support is provided by a community forum, and there are naturally commercial licenses available. This edition allows you to quickly provision iSCSI-based storage to be served over the network, but has no resiliency built in. I implemented it recently to provide additional storage to an ageing server running Exchange, where physical storage was maxed out and network-based storage was therefore the only option – but Exchange needed to see it as a physical disk rather than use a network share.

There is a two-node license available, providing replicated storage across two servers running the software. This is where it provides real value, as you've now introduced storage resiliency, given the data is available in two places. From experience, once the initial replication has taken place, and provided you've set up your iSCSI connections and MPIO to use the multiple routes to the storage, powering down one of the servers running Starwind Virtual SAN had no impact on access to the data provided by the Virtual SAN. Once the server was powered back up, it took a little time to re-establish its replication relationship, but I'm going to put that down to my environment.
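For context, the client side of that multipath setup is fairly standard Windows iSCSI/MPIO configuration. A rough sketch from an elevated command prompt (the IP addresses and target IQN below are made up, and you'd normally drive this through the iSCSI Initiator and MPIO control panels instead):

rem Enable the Multipath I/O feature and let MPIO claim iSCSI devices (this triggers a reboot)
dism /online /enable-feature /featurename:MultipathIo
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

rem Register both Virtual SAN nodes as target portals, then log in to the target
iscsicli QAddTargetPortal 10.0.0.11
iscsicli QAddTargetPortal 10.0.0.12
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:sw01-lun1

In practice you'd establish a session over each portal (and mark them persistent) so MPIO has two genuine paths to fail over between.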

The software can be used in one of two ways: you can install it directly to your server (bare metal) or you can install it to a virtual machine, with both Hyper-V and VMware vSphere supported. There are benefits to installing directly to your server, mainly lower RAM usage and not having the overhead of a full OS install running in a VM on top of your hypervisor. Two network connections are required: one as a synchronisation channel, ideally a direct connection between the two servers, and the other for management and health monitoring of the synchronisation relationship.

For extra resilience, if the license permits, a further node can be added to the configuration that is off-site, for Asynchronous replication.

HP StoreVirtual VSA


StoreVirtual is a virtualised storage appliance provided by HP, originally a product known as LeftHand. It is only available as a virtual machine and so adds some overhead to its implementation, using at least 4GB of RAM, which increases depending on the capacity hosted. Supported on both VMware and Hyper-V platforms, there is a wide market for the product.

The StoreVirtual VSA can function as a single node and works equally well in a multi-node configuration with scale-out functionality. Because it cannot be installed bare metal, other than on a dedicated hardware appliance, performance has the potential to be slightly impacted by the overhead of the hypervisor providing access to the underlying physical storage.

In terms of management, there is a dedicated management interface, provided by installing the management tools on another computer (or VM) on the network. From here it's simple to provision storage, set up access control over who can reach that storage, and view health and performance information.

High availability is achieved not through MPIO but by presenting a group of servers as a single cluster. This, however, needs to be managed by a further virtual machine running a role called Failover Manager (FOM), which again adds to the overall overhead of the implementation. In an ideal scenario, the FOM would be hosted on hardware independent of the other two nodes to avoid a loss of quorum. StoreVirtual also supports asynchronous replication for off-site copies.

Update: for clarity, FOM is required when an even number of nodes are active, to ensure a majority vote is possible for failover purposes.

Limitations of Testing

My lab consists of 2 x HP Microserver Generation 8, both with Intel Xeon E3 series processors and 16GB RAM, both are connected to a HP Procurve 1800 managed gigabit switch. With only 16GB of RAM on each Hypervisor, it’s difficult to simulate a real-world workload on the I/O front, particularly when at bare minimum, 6GB needs to be allocated to StoreVirtual and a FOM on one of the machines, and 4GB for the redundant node on the other.

Pros and Cons

Starwind:

Pro – Installs directly to Windows Server 2012 R2 or to a VM
Pro – Relatively low memory footprint
Pro – Lots of options to tweak performance, can leverage SSD cache etc.
Pro – Generous licensing for evaluation purposes – a two-node license (provided the nodes are VM-based) is available free of charge

Con – I’d heard of Starwind before, having used a few of their useful tools, but would you trust their solution with your enterprise data?
Con – Got caught out by a full resync when one node was shut down and restarted; it took some time to re-establish the synchronisation

HP Storevirtual:

Pro – A brand name you know, and might find easier to trust
Pro – Up to its 13th version, the underlying OS is proven and stable
Pro – Intuitive management tools

Con – Must be run as a VM; minimum RAM required is 4GB for a StoreVirtual node, and a Failover Manager is required to maintain quorum in a two-node configuration
Con – The 1TB license expires after 3 years, so for lab use, be prepared for when that time comes

Closing thoughts

I can vouch for the solid performance of Starwind Virtual SAN, as the shared storage for my lab's highly available Hyper-V VMs is running on a two-node Starwind Virtual SAN. Ultimately, a lack of hardware available to perform a comparable test has meant I have not been able to use StoreVirtual to host the same workload. The licensing of StoreVirtual put me off a little: Starwind's license is non-expiring, but the 1TB StoreVirtual license on offer is restricted to 3 years.

Once I’ve found some suitable hardware to give StoreVirtual a fuller evaluation, I’ll add more detail here.

 


Link Aggregation between Proxmox and a Synology NAS

I’ve been using Synology DSM as my NAS operating system of choice for some time, hosted on a HP N54L Microserver with 4 x 3TB drives and a 128GB SSD. This performs well and I’ve been leveraging the iSCSI and NFS functionality in my home lab, setting up SQL Database storage and Windows Server Failover clusters.

Having Proxmox and Synology hooked up by a single gigabit connection was giving real world disk performance of around 100MB/s, near enough maxing out the connection. For Synology to have enough throughput to be the storage backend for virtual machines, this would not cut it, so I installed an Intel PRO/1000 PT Quad in each machine giving an additional 4 gigabit network ports.

Proxmox itself supports network bonding modes of most kinds, including the most interesting of all, balance-rr (mode 0), which effectively leverages multiple network connections to increase available bandwidth rather than just provide fault tolerance or load balancing.

I could easily create an 802.3ad link-aggregated connection between the two, which worked perfectly, but in a directly connected environment it serves no purpose other than providing redundancy – the hashing algorithms used for load balancing will try to route all traffic from one MAC address to another via the same network port. So I set out to investigate whether the Synology could support balance-rr (mode 0) bonding, which sends packets out across all available interfaces in succession, increasing throughput.

Note: you'll need to have already set up a network bond in both Synology and Proxmox for this to work. I won't cover that here as it's simple on both platforms; what I'll cover is how to enable the mode required for the highest performance.

The simple answer is no – Synology will not let you configure this through the web interface; it wants to set up an 802.3ad LACP connection or an active-passive bond (with failover in mind rather than performance). I found, however, that provided you're not scared of a bit of config file hacking (you probably wouldn't be using Proxmox if you didn't know your way around a Linux shell, and DSM is based on Linux too), you can enable this mode and achieve the holy grail that is a high-performance aggregated link.

Simply edit /etc/sysconfig/network-scripts/ifcfg-bond0 and change the following line:

BONDING_OPTS="mode=4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"

to

BONDING_OPTS="mode=0 use_carrier=1 miimon=100 updelay=100"

Now, reboot your Synology NAS and enjoy the additional performance this brings.
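If you want to check the change has stuck after the reboot, the kernel's bonding driver reports the active mode (the same check works on the Proxmox side):

cat /proc/net/bonding/bond0

Look for "Bonding Mode: load balancing (round-robin)" near the top, along with an entry and MII status for each slave interface.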

For reference, here’s the output from ‘iperf’ performing a single transfer:

root@DiskStation:/# iperf -c 10.75.60.1 -N -P 1 -M 9000
WARNING: attempt to set TCP maximum segment size to 9000, but got 536
------------------------------------------------------------
Client connecting to 10.75.60.1, TCP port 5001
TCP window size: 96.4 KByte (default)
------------------------------------------------------------
[  3] local 10.75.60.2 port 37463 connected with 10.75.60.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.40 GBytes  2.92 Gbits/sec

Not bad?!?

High Availability and DR in SQL Server 2014 Standard

In my day job it's part of my role to consider ways in which the IT department can work more effectively, as well as ways we can get our IT infrastructure to work better for us. A project that's currently under way is migrating from SQL Server 2008 R2 to SQL Server 2014 Standard. The current plan is for it to run on its own box, and whilst it will have the horsepower to deal with the load, this approach is ultimately vulnerable to a number of different types of failure that could render the database server unusable and adversely affect the business.

Part of my studies towards MCSE: Data Platform involves high availability and disaster recovery strategies for SQL Server, but most of the relevant features are noticeably absent from the Standard edition of SQL Server.

So, how can I work with Standard and still give us some type of fault tolerance?

I'm currently exploring failover clustering, whether on physical or virtual servers, using Windows Server 2012's built-in Failover Clustering feature along with a SQL Server 2014 cluster – Standard Edition, provided it is correctly licensed (either through multiple licenses or with failover rights covered by Software Assurance), allows for a two-node cluster.
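As a very rough sketch of the Windows side (node names and the cluster IP below are placeholders, and this glosses over networking, storage and validation warnings), the cluster itself can be stood up with the Failover Clustering PowerShell module before the SQL Server failover cluster instance is installed on top:

Install-WindowsFeature Failover-Clustering -IncludeManagementTools   (run on both nodes)
Test-Cluster -Node SQLNODE1, SQLNODE2
New-Cluster -Name SQLCLUS01 -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.50

SQL Server 2014 setup is then run as a 'New SQL Server failover cluster installation' on the first node and 'Add node to a SQL Server failover cluster' on the second.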

Windows Failover Clustering relies on shared storage, however, thereby introducing another potential point of failure – the storage platform – that could also lead to downtime.

Failover Clustering is great, but how do I provide fault tolerant storage to it?

I’ll document here my experiences with both hardware and software solutions to this.

I'm considering Synology rack-mounted NAS devices in a high-availability configuration, but the potentially more cost-effective solution is to virtualise a VSAN in a hypervisor of choice; SANSymphony and StarWind Virtual SAN are options I'll consider. All of this will need to be tested in my home lab, which is a Lenovo ThinkServer TS440 with a Xeon E3 processor, 32GB RAM and 256GB SSD storage, backed by an HP N54L providing shared storage via iSCSI. It runs Proxmox as my hypervisor of choice – a platform I'd been using for a number of years before Hyper-V really took off. It's open source with commercial offerings, and uses KVM/QEMU – the solution must work here first.

I’ll post an update soon.

What to do when your SA account gets locked in SQL Server

By default, when using mixed mode (Windows and SQL Server) authentication, SQL Server 2008 R2 sets up the SA account with a password policy that locks the account after a number of failed login attempts. This is particularly troublesome when a rogue process attempts to log in with an incorrect or outdated set of SA credentials, and it's all too easy to skip over setting up additional administrators with Windows accounts.

On my development server, where I have a number of projects underway, I naively missed the step of setting a local or domain account as an administrator, meaning SA was the only account with sysadmin privileges on the instance. This, paired with the default option of enforcing password policy on the account, meant it was too easy to inadvertently lock the SA account, losing access completely to the entire contents of the databases.

Apex SQL produce a number of SQL-related tools. For the one I was trialling, one of the first steps is to set up a database connection: you enter a server / instance name and choose Windows or SQL authentication. A helpful (but dangerous) feature is that the software appears to attempt to connect using the credentials as you type; if you're not quick, this leads to the SQL Server being spammed with incorrect logins, eventually locking the account.

Time to panic.

The trick, in this circumstance, is to make sure you are logged on to the server with an account that has local administrator privileges. As long as you have this, you can leverage SQL Server's administrative connection in single-user mode. To do so, shut down the SQL Server service for the instance – remember this will disconnect everyone on the instance, so only do it out of hours, when you have no other choice, or on a server only you are connecting to.

Then, open up a command prompt with administrator privileges and navigate to the SQL Server executables for your instance; it'll be something like: c:\Program Files\Microsoft SQL Server\MSSQL10_50.1\MSSQL\Binn

Run sqlservr.exe with the additional switch of -m and you’ll fire it up in Single User mode. Now, open up management studio and go and connect using Windows Authentication. With a bit of luck, you’ll be in.

Now, go unlock the SA account. You’ll have to change the password as part of the unlocking process, but go ahead and change it back once this has been completed if it’s needed.
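If you'd rather stay at the command line than use Management Studio, the whole dance can be done with sqlcmd. A hedged sketch for a default instance (the instance folder, service name and password below are placeholders – adjust for your install):

net stop MSSQLSERVER
cd "C:\Program Files\Microsoft SQL Server\MSSQL10_50.1\MSSQL\Binn"
sqlservr.exe -m

rem From a second elevated command prompt, using the single connection that single-user mode allows:
sqlcmd -S . -E -Q "ALTER LOGIN [sa] WITH PASSWORD = 'TempStr0ngPassword!' UNLOCK;"

The UNLOCK clause has to be paired with a PASSWORD clause, which is why a password change is unavoidable here.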

With this complete, you can terminate the SQL Server instance running in single-user mode by hitting CTRL + C and confirming with Y. Now bring the SQL Server service back up, and normality should be restored.


Social Media Masterclass at Business South

I was privileged to be invited to speak at the Social Media Masterclass at Business South 2012. I spoke about Facebook as part of a panel of Social Media experts to offer some insight into using Social Media for business.

The panel fielded questions that were pre-recorded by some of the estimated 2000 delegates who attended the Business to Business event, as well as taking questions from the attendees at the session – an estimated 200 local business leaders took part at the WOW Business Growth Zone at the event.

It was a real pleasure to see my experience in the field of Social Media acknowledged by being invited to speak at this prestigious event.

Watch a highlight video of Business South here.

Business South 2012

KLM prove they ‘get’ social media

Dutch airline KLM – who you may remember me writing about previously with their 'Tile and Inspire' campaign – have demonstrated a great understanding of social media with a number of successful and innovative campaigns in the past.

Their latest stroke of social media brilliance goes by the name of 'Seatmates'. Whilst not a completely new concept – it takes inspiration from Ticketmaster's interactive seat maps – it allows passengers to choose seats based on the social media profiles of those already on the flight.

The service currently works with Facebook and LinkedIn, and using it is completely optional – passengers are of course able to opt out of the feature, or at least restrict what information is published about them – but it could prove a great way for passengers to meet and interact with people who share the same interests or other characteristics.

It would certainly liven up a 15-hour transatlantic journey knowing I could choose to sit next to someone I'd share some common ground with.

I wonder what’s next for KLM?
