The department has learned from the outages earlier this semester and will analyze equipment in hopes of meeting users’ needs.
It’s been about a month since the 17,000 new devices that students and staff brought to campus this fall exposed a hidden limitation in the university’s core routers, causing a network outage that took more than two weeks to resolve. It’s been just a couple of days since a faulty firewall caused a two-hour outage in the middle of a weekday.
If you'd like a detailed description of the challenges we faced during the fall opening outage, visit Anatomy of a Network Outage. It's long, but worth the read.
Now that the virtual dust has settled, I’d like to take a moment to share some of the lessons the Office of Information Technology took away from this experience.
First, the Internet matters.
Yes, I’m stating the obvious, but it bears repeating: Everyone at this university depends on the network. It doesn’t matter whether you are online or on campus, or whether you’re a student, faculty member or staff member. If you don’t have reliable Internet, you’re stuck. An outage like this is a reality check, a reminder of just how critical IT infrastructure is to the university’s mission.
Second, stuff happens.
There are two kinds of IT people: those who have lived through a nasty crash, and those who are going to. No matter how good your plans and procedures are, something eventually is going to break. The one factor you have true control over is how you respond when things go south, which leads me to my last point.
Transparency matters.
As soon as it became clear that we had a major problem on our hands, we made the call to communicate early and often. Regardless of whether the news was good or bad, we were going to report it. If there was nothing to report, we would report that. Some of our staff were understandably nervous about this approach.
But a strange thing happened as we tweeted, posted and emailed the details of our marathon days and nights: People started rooting for us instead of cursing at us. Unsolicited thank-yous rolled in via email and social media. People stopped our technicians in the hallways to tell them how much they appreciated their work.
The effect was energizing. Technical staff, already stretched to the limit, pushed themselves even harder, staying on site in the middle of the night to try just one more thing or have one more look at those system logs.
So where do we go from here?
On the technical side, we will be analyzing our entire network setup to identify areas where equipment might need to be reconfigured or re-purposed to improve performance. On a broader scale, we will work with students, faculty and staff to figure out if we have the right balance of services and if those services are meeting people’s needs.
Some of those conversations will not be easy, but they need to happen.
If you want to participate, follow @OhioIT on Twitter, keep an eye on the university’s bi-weekly Compass emails and send your suggestions to cio@ohio.edu.
Let us know what you think. We’re listening.
Sean O’Malley is a spokesman for the Office of Information Technology.