Technology at Geneseo Community School District 228
There are multiple ways to use YouTube. You can subscribe to channels, comment, and join the YouTube community, or, if you are like me, you may just want to watch the video. The latter applies to schools and educational settings, where teachers and students find value in watching certain videos but do not need comments or suggestions for other videos; in fact, these things actually distract and, more importantly, slow things down. The fix for this has been created by Google itself and is called YouTube Feather.
Click here to access YouTube Feather and check the box to participate in the YouTube Feather beta.
Feather works by removing Google+ integration, removing the comment section, removing suggested videos, and defaulting to mid-level video quality based on the network connection. It also minimizes advertising; overall, it returns to the simplicity and core of what makes YouTube great.
Again, Feather is great for use in educational settings, where users post inappropriate comments or where the suggested videos that come up with the content are off topic or simply inappropriate. Let's hope Google does not kill this beta and keeps it going.
Over the weekend I changed out a battery on an HP MSA20 RAID device that houses data for Windows Active Directory based student directories and also stores copies of backups for a few servers. The system has functioned well over the last couple of years, but the battery failure caused data corruption on that RAID channel. Luckily this data was easily restored from another source, but I consider it a design defect that a battery failure would corrupt the RAID. Most RAID devices simply revert to a slower mode when a battery fails, disabling the write cache; this allows the device to keep functioning, and it returns to full speed when a new battery is installed.
On the back of the unit a digital panel indicates the error: F1 (lower battery failed), F2 (upper battery failed), or F3 (both failed).
The upper battery can be replaced by powering off the unit and removing the left-side array. Inside is a battery pack that takes a bit of fiddling to remove from its casing.
I had to remove the unit from the rack and remove a screw that held the left-side RAID channel in place.
After replacing the battery pack I had to hold the power button on the back and wait as the unit did a full boot. The battery pack was around $25.
Apple iTunes versions 10 and 11 do not have the ability to import Windows Media Audio (.wma) files. If you are in the process of migrating from a PC to a Mac, this can be problematic if you have used Windows Media Player, the Zune, or other Windows-centric devices and software to manage music.
Since iTunes cannot natively import these files, you will need a third-party tool to convert the audio files into a compatible format.
I found the free program Max (Macintosh Audio for OS X). It works well and can convert multiple audio formats, including WMA.
To convert a WMA music collection for the Mac I would recommend using the MP3 encoder, since the MP3 format does not carry any DRM restrictions.
First, here is the application Max in ZIP format. Simply copy the Max application inside to your Applications folder.
Launch Max, then click Max | Preferences. Here you can choose an output format to convert files to. In the bottom box choose MP3, then in the top box click the +.
In most cases there is no need to raise the bit-rate higher than 192.
Now make sure the MP3 box is checked, then simply click File | Convert. You can select multiple folders with up to 100 songs at a time. By default the program will copy your converted songs directly to your Music folder on the Mac side.
That's it: a nice, easy, useful tool.
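If you prefer the command line, the same conversion can be scripted with ffmpeg instead of Max. This is a hedged sketch: it assumes ffmpeg built with libmp3lame is installed, and the function name `convert_wma_dir` and the example paths are mine, not part of Max.

```shell
# Sketch: batch-convert a folder of .wma files to 192 kbps MP3 with ffmpeg.
# The function name and paths are illustrative, not part of Max or iTunes.
convert_wma_dir() {
  src="$1"
  dest="$2"
  mkdir -p "$dest"
  for f in "$src"/*.wma; do
    [ -e "$f" ] || continue                     # skip when the folder is empty
    base=$(basename "$f" .wma)
    # 192 kbps matches the bit-rate recommendation above
    ffmpeg -i "$f" -codec:a libmp3lame -b:a 192k "$dest/$base.mp3"
  done
}

# Example: convert_wma_dir "$HOME/Music/WMA" "$HOME/Music/MP3"
```

Like Max, this strips nothing but the container; DRM-protected WMA files still cannot be converted by either tool.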
At Millikin Elementary there is a small stack of servers (4 Xserves and 1 Xserve RAID) that handles all Mac-based student logins and home directories for the school district. These servers replicate an Open Directory that has accounts for all students K-12 and connects them to an Xserve RAID for mobile home syncing. Over the last couple of days I have upgraded these systems and made some changes to how they function.

The first upgrade was memory in the two AFP servers that each connect to one controller in the RAID device. The machines now have 12 gigs of RAM, up from 4, and can consistently maintain a throughput of 140-170 Mbps on the AFP share volume. Each machine can easily handle 100+ simultaneous active connections on its respective channel of the RAID device. With home directories syncing at login and logout, we will be able to have more than 800 students simultaneously logged into student lab machines, which exceeds the number of traditional lab computers we have.

These servers are 2006 Intel Core 2-era Xeon-based Xserves that went into production in 2007. In addition to the RAM upgrade, these machines have some new drives and a new OS: the operating system has been upgraded from OS X Leopard Server to OS X Snow Leopard Server.
The setup at the school district is that all lab and student machines will run OS X Lion (10.7.4), while the servers with the AFP shares and Open Directory stay on 10.6.8. After reading numerous reviews, comments, and newsgroup posts online, I decided against OS X Lion Server at this point; 10.7 appears to be a transitional server product from Apple, whereas 10.6.8 is quite stable and refined (more important for a server that runs 24×7).
Student accounts at the K-5 level will no longer sync entire home directories with these servers. Each elementary iMac lab has been set up for the last 5 years with assigned computers for each student. In this scenario there is no need to send network traffic back and forth, so these accounts are now in groups (K-2 and 3-5) which will only use the servers for authentication and a basic template to create a home directory structure. The effect will be extremely fast login times with little to no impact on local network congestion. With 900 iPads already deployed in the K-5 schools, reducing unnecessary network traffic is a plus.
(Opening one of the two 2006 Xserves to add memory)
We have an 8-core Xserve, one of the last models of this server before Apple discontinued the line. This server houses our podcasts, which run on Podcast Producer / Podcast Capture, and also holds the cloud storage data for a series of simple applications I wrote. It also runs Deploy Studio and stores all images for NetBoot deployment. Purchasing from Apple can be a bit pricey, and when this server was bought we had it preconfigured with 3 gigs of RAM, which seemed adequate at the time. In fact, 3 gigs was adequate for running NetBoot and the occasional podcast; fast forward to now, and this server is utilized heavily: increased podcast activity and greatly increased use of AFP, NFS, and SAMBA file shares were taxing its memory. A quick stop at crucial.com and this server is now running 12 gigs of RAM (cheap, under $200). The server has capacity for up to 64 gigs of RAM, but I think quadrupling the memory at this point will ease the bottleneck and allow it to function efficiently.
12 memory slots in total, using six 2-gig DDR3-8500 sticks.
Response time of our podcast repository has greatly improved; take a look here: http://gcsdpodcast.org:8171. There is a noticeable speed increase in download and access times for these podcasts when accessed through our iTunes U site as well. You can view the (new) Geneseo CUSD 228 iTunes U site here: http://itunes.apple.com/us/institution/geneseo-c-u-school-dist-228/id506398882
Greatly increased processing speeds for submitted podcasts using Podcast Capture / Podcast Producer.
Better performance on cloud file storage apps (Northside Share, Millikin Share, Art Share, Music Share, etc.).
In a continuing effort to reduce the number of physical servers and move existing servers into flexible, movable VHD images running in Hyper-V, I reconfigured the Destiny library catalog system server (http://www.geneseoschoollibrary.org). Before installing Windows Server 2008 I created a VHD image of the existing system and then loaded it as a Hyper-V image on another Hyper-V host. The server is a PowerEdge 2950 III with two Xeon chips totaling 8 logical cores. I upgraded the RAM to 16 gigs and enabled VT-x virtualization in the BIOS.
This server is 4 years old, so I decided to replace the hard drives with Western Digital RE4 drives and run them in a RAID 10 configuration.
I then copied the Destiny library back to its original server, now running as a virtual machine, with plans for the machine to host 3 other virtual machines.
There is one last older Pentium III-based server left in the district; I plan to move it to a VHD image and run it on this Hyper-V host as well.
Apple’s new MacBook Air is really a device I am impressed with. An i5 processor, a solid-state drive (with TRIM support), a small form factor, and good battery life: this machine is really the future of laptops, or, as they are newly being categorized, ultrabooks. The biggest limiting factor in attempting to manage or deploy large numbers of these devices is how to image/re-image the machines quickly and efficiently. Without Target Disk Mode or FireWire, the MacBook Air offers a new feature that may in fact be the most efficient of all: wireless NetBoot.
Using a 10.6.8-based server running the NetBoot service, paired with Deploy Studio (stable build version 1.0 RC 130, Oct 2011), you can wirelessly NetBoot and image MacBook Airs as Macs, as PCs, or as Mac/PC dual boot, all over a wireless NetBoot.
Using this setup it is possible to re-image and set up 25-30 MacBook Airs simultaneously off a single wireless access point, removing really the only major hurdle to large-scale deployment and management. Seriously, when you combine the speed, size, and overall look of a MacBook Air, I would look for the competition to start copying this design immediately. I would also expect the advanced EFI firmware feature of enabling wireless before booting to be added by other major computer manufacturers. Yes, the re-image process is slower over wireless, but it is still better than trying to use a USB-to-Ethernet adapter on large numbers of machines.
EFI Wireless with Netboot Server
Wireless Netboot with Deploy Studio
I am on pace to replace all network switches in the district by Christmas break and have all endpoints capable of 1000 Mbps connections, or at minimum Wireless N connections over 100 Mbps. The high school is now finished, and I will start on the middle school this week or perhaps early next. Switching from a Catalyst 3500 XL to a Catalyst 2960G with all-gigabit connections has a very positive effect on overall networking. With all of our desktops, laptops, and wireless access points able to connect at 1 Gbps, or close to 240 Mbps on Wireless N, the network is running smoothly. Installing the switches has so far gone smoothly; the only major hiccups are reconfiguring all switch-to-switch connections as dot1q (the old switches use ISL, which is no longer supported) and having to order LC-to-ST multimode fiber cables, since some of the fiber drops are SC and others are in the ST form factor.
I have also reconfigured the existing 3560G switches that I installed last year so that all 10 Gb fiber connections use dot1q. There is a noticeable speed increase when all ports and paths operate on dot1q rather than having some ports switch back and forth between VLAN trunking protocols.
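For reference, pinning a 3560-series trunk port to 802.1Q looks like this in IOS (a hedged sketch; the interface name is illustrative, not one of our actual ports):

```
! Force 802.1Q encapsulation instead of ISL or negotiation (3560-series IOS)
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

Note that on the 2960G the encapsulation command is unnecessary, since that platform only supports dot1q.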
I also replaced some Netgear 24-port 100 Mbps switches that were used in labs with the 1000 Mbps version of the same model (2 switches at Southwest and 2 at the high school).
(10 year old retired switches at HS)
While studying some Apple info I came across a little tidbit about the default OS kernel being 32-bit versus 64-bit on 2009-2010 models. I had assumed that with Snow Leopard and the 64-bit-capable Intel Core 2 Duo processors, the default had moved to 64-bit on 2009+ machines. This turns out not to be the case: only 2011 MacBook Pro models default to the 64-bit kernel. Previous models from 2009 and 2010 can run the 64-bit kernel, but you must first make a change to the boot.plist file to make the change permanent.
To see if you are running the 64-bit kernel and extensions, go to About This Mac and look. Click on Software in the column on the left, then take a look: you can see my 2010 MacBook Pro was set to No.
To make the change permanent you can type one command in Terminal:

sudo systemsetup -setkernelbootarchitecture x86_64
You can also simply test the 64-bit kernel by rebooting your machine and, as soon as you see the grey screen, holding down the 6 and 4 keys simultaneously. (You can boot into 32-bit mode by restarting and holding the 3 and 2 keys.) This change only holds until the next restart; the Terminal command above makes it permanent.
Why change? Well, if you are running 4 GB or more of RAM, the 64-bit kernel is needed to properly address the memory (despite Apple computers utilizing RAM differently than a Windows OS, this still holds true for the most part). With Apple hardware moving to the 64-bit kernel by default on 2011+ models, it looks like Apple is going full 64-bit from here on. Will you see major performance gains? Not really, but running 10.6.8 with the 64-bit kernel does seem slightly faster on my machine.
(After the change)
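After rebooting, you can also confirm from Terminal which kernel actually loaded:

```shell
# Reports the running kernel architecture on Snow Leopard:
# x86_64 means the 64-bit kernel loaded, i386 means the 32-bit kernel
uname -m
```

This is a quick sanity check after using the key combinations or the systemsetup command above.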
I noticed that many users between 10.6.3 and 10.6.4 got stuck in a holding pattern where their automatic update would fail. Not only would this update fail, but if you downloaded the combo 10.6.6 update, it would also fail. A quick look in the Console reveals that the problem occurs because too many files are open. Since 10.6.6 is a huge update, you can imagine it probably opens a huge number of files, but knowing what your system's limit is set to is not very straightforward.
On all of the problematic machines, running this command in Terminal
sysctl -A | grep kern.maxfiles
showed that maxfiles was set to 2,000. This number is too small to complete the update; I believe the default is supposed to be around 10,240. On the newer MacBook Pros this number can be set much higher: setting it to 20,480 seems to work great, and all updates should process normally.
To set this number on a Mac, open Terminal, then enter
sudo sysctl -w kern.maxfiles=20480
Once this is done, simply restart the update process and it should work normally. How does this happen? My best guess is that machines which started on 10.4 and were upgraded to 10.5 and then again to 10.6 kept the smaller max-open-files limit (2,000). This would make sense, since at the time those machines shipped with much less memory; on newer machines with 2 GB or more, this restriction is no longer needed.
Here is a screenshot checking max open files on my machine; notice that mine was set to 12228, and I have not had trouble with any updates. As I stated earlier, on the machines with failed updates (units upgraded from 10.4 to 10.5 and then 10.6) the number was set at 2,000.
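If you manage a lot of Macs, the check above is easy to script. This is a hedged sketch (the function name maxfiles_ok is mine, not a system command): it succeeds only when kern.maxfiles is at or above the 10,240 default mentioned earlier.

```shell
# Sketch: succeed only when kern.maxfiles meets the 10,240 default.
# maxfiles_ok is an illustrative name; sysctl is the real system command.
maxfiles_ok() {
  current=$(sysctl -n kern.maxfiles 2>/dev/null || echo 0)
  [ "$current" -ge 10240 ]
}

# Example: bump the limit only on machines that need it
# maxfiles_ok || sudo sysctl -w kern.maxfiles=20480
```

Remember that sysctl -w does not survive a reboot, so upgraded machines stuck at 2,000 may need the fix reapplied after restarting.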