Architecture Chat Tomorrow (Thursday)

The fortnightly Architecture chat is tomorrow (23rd October) - 11:30am at Garrisons in Sylvia Park.

This week I'm going to leave it very open - some quick thoughts on possible things we could talk about include:

  • Silverlight 2 RTM released.
  • ASP.Net MVC Beta Released.
  • Building design/graphing/modelling tools with WPF and competing technologies.
  • The current financial situation/outlook and its effect on start-ups, SaaS etc. - which ideas have merit, what untapped markets offer revenue potential, and the harsher criteria being applied to new ideas (i.e. the ability to provide immediate cost savings).


If anyone else has any suggestions, feel free to leave a comment or message/email me directly; otherwise I'll see you all there tomorrow.

Links to write-ups for previous chats, and information on the location etc. can be found on the wiki.

Read More

Moving to HyperV

I decided recently to upgrade a couple of commodity-hardware staging boxes I use for hosting virtual machines - they were pushed to capacity and had no viable upgrade path. After a bit of umming and erring over what would be cost effective, I ended up opting for 2 x the following kit:

  • Q9550 processor (2.83GHz, though they'll probably end up clocked around 3GHz or a little above, once I'm happy the boxes are stable).
  • Asus P5Q-EM motherboard (cheap, on-board graphics, supports 16GB of DDR2).
  • G.SKILL 16GB DDR2 kit.


As drop-in replacements these worked out really cost-effective, with the RAM being the most expensive part at just under $800 for the kit.

So for around $3K NZ I end up with 32GB of RAM and 8 reasonably fast cores - not bad, plenty of bang for buck compared to picking up equivalent server-spec hardware - and I avoid the more expensive FB-DIMMs (which would have worked out at least $3K for the RAM alone).

I decided to go with HyperV on these new machines, replacing my existing VMWare Server setup... It's actually been reasonably painless to port the various machines across, once I got a handle on a few little gotchas.

For the Windows 2K3 virtual servers I had to:

  • Uninstall the VMWare tools (via add/remove programs).
  • Uninstall the VMWare video driver (from the device manager).
  • Run the VMWare cleanup utility (I had to do this on a few machines to clean up a "VMWare memory service" that was failing to start after moving the machine to HyperV) - you can grab the tool from the bottom of this KB article; it works on VMs as well as hosts... in one case I also had to do some registry spelunking to remove the errant service (see the sketch after this list).
  • Add an IDE drive (it doesn't matter what size - I just set it to the smallest possible value) in the VMWare server - see here for details - skipping this will result in a BSOD if your VMWare server is using virtual SCSI drives, as mine were.
  • Shut down the virtual machine.
  • Convert the VMDK VMWare hard drives to VHDs using this tool (ignoring the IDE drives I added).
  • Create a new machine in HyperV and add the drives back in (as IDE drives).
  • Boot it up.
  • Insert the Integration Services Disk (CTRL+I) - let it do its thing, and reboot when prompted.
  • All done.
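
For the registry spelunking step, something along these lines saves a bit of clicking around in regedit - just a minimal sketch (Python, using the standard winreg module, run inside the affected guest) that lists any service entries still mentioning VMWare, so the errant ones can then be removed by hand (e.g. with "sc delete <name>" from an elevated prompt):

    # List leftover VMWare services still registered in the guest's registry,
    # so the errant entries can be removed by hand afterwards.
    import winreg

    SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as services:
        subkey_count, _, _ = winreg.QueryInfoKey(services)
        for i in range(subkey_count):
            name = winreg.EnumKey(services, i)
            try:
                with winreg.OpenKey(services, name) as svc:
                    display, _ = winreg.QueryValueEx(svc, "DisplayName")
            except OSError:
                display = ""
            if "vmware" in name.lower() or "vmware" in str(display).lower():
                print(name, "-", display)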


For the few Ubuntu & Debian virtual machines I just:
  • Uninstalled the VMWare tools, if they were installed (I don't always bother with Linux - most of the machines aren't disk or network intensive, so it never seems to make much difference).
  • Converted the VMDK VMWare hard drives to VHDs using this tool.
  • Created a new machine, adding the drives back in (as IDE drives).
  • Removed the integrated network adaptor that's added by default and replaced it with the "Legacy network adaptor".
  • Booted it up, quickly hit "e" and edited the grub boot line, replacing "root=/dev/sda1" with "root=/dev/hda1".
  • At this point it should finish booting (otherwise it'll hang trying to boot and eventually dump you into an (initramfs) prompt).
  • Logged in, located "menu.lst" (normally in /boot/grub/) and changed references to /dev/sda1 to /dev/hda1, then saved and rebooted so the change of root filesystem device is persisted (a small sketch of this follows the list).
  • Optionally installed the Linux integration components, though it takes some trial and error to get them working (I'm still fiddling with this when I can be bothered - once working they should allow the virtual machine to stop using the legacy network adaptor and provide improved disk I/O and network performance).
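
For the menu.lst change in particular, a quick script run inside the guest (once it's up on /dev/hda1) saves hunting through the file by hand - a rough sketch only, assuming grub legacy with menu.lst in the usual location and that swapping /dev/sda1 for /dev/hda1 is the only substitution needed:

    # Rewrite /boot/grub/menu.lst so the root filesystem points at the IDE
    # device HyperV exposes (/dev/hda1) instead of the old SCSI one (/dev/sda1).
    # Run as root inside the guest; a backup of the original is kept alongside.
    import shutil

    MENU = "/boot/grub/menu.lst"

    shutil.copy2(MENU, MENU + ".pre-hyperv")  # keep the original around

    with open(MENU) as f:
        text = f.read()

    with open(MENU, "w") as f:
        f.write(text.replace("/dev/sda1", "/dev/hda1"))

    print("updated", MENU)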


And for the Windows Server 2008 virtual machines I skipped adding the extra virtual IDE drive prior to conversion, as it just seemed to work (i.e. no BSOD).

So far it's been a reasonably painless process... the best part is I now have plenty of room for some additional build servers to target my own open source projects, host some examples of RESTful/OAuth services, etc.

Edit - Feb 1st 2009:  It should be noted that I have since moved over to using VMWare Server version 2, because I found the performance of HyperV to be diabolical for CPU-intensive tasks, i.e. the server would be utilizing only about 10% of the available CPU resources when running continuous integration builds, resulting in builds taking well over 30 minutes instead of the usual 2 or 3 - at the time I found a few other users who suffered from the same problems, but no solution.

At first I thought it might be a result of converting the VMWare machines to HyperV, but freshly paving new build servers didn't make any difference.  Moving over to VMWare Server v2 yielded very high CPU utilization, with builds that used to take 3 minutes on my old build server now taking less than a minute on the new host machine :)

My medium-term plan is to migrate over to ESXi, once I've sourced some Intel server gigabit NICs, using iSCSI on my openfiler box for storage.

Read More

Architecture Chat #35

After a general catch-up on what everyone's doing and introductions from a graduate newcomer (Nick Irvine), we launched into talking about robotics, including:

  • Automated Kiwifruit picking robots.
  • Fruit laser bar-coding and its lack of uptake so far.
  • The possibility of pre-ordered fruit, i.e. you identify how you want your fruit, and it's picked at the precise moment it fits the consumer's needs.
  • The use of heuristic markings for fingerprinting of individual fruit (i.e. the idea that nature, by design, provides a unique identifier for each piece of fruit, or that we could mark a fruit for identification in a way that wasn't detectable/displeasing to the human eye).
  • Smart cars, self-navigating cars, convoy or drafting applications for self-navigating cars, and how these systems would deal with disconnections or extraordinary circumstances.


As an offshoot of self-driving cars, Peter talked a little bit about Livescribe (pens that record what you write and say - in unison - and allow playback or online publishing) and the future of pen-based capture devices and note taking, i.e. Evernote etc.

We talked around multi-dimensional separation of concerns, and the idea of having both distinct dimensions (that may not be based on a single physical AOP approach) and modules of concerns, and the challenges/opportunities/solutions these "hypermodules" could provide to everyday business problems - this also highlighted the pitfalls of existing AOP approaches, which often let you assemble incompatible/incorrect sets of concerns, something modules could help prevent...

I talked briefly around REST and the concept of a generic RESTful application development platform that I've been prototyping lately (like Dream, but a little more resource and query oriented, and of course with OAuth support OOTB), rather than repurposing an MVC framework or using WCF (which also feels like a bad fit) or ADO.Net Data Services.

Last of all, just as we were breaking up, we talked about MassTransit - Jamie noted he'd been working on a similar project (but for Java?) while at Auckland Uni.  Perhaps I'll have a more in-depth report on it next time, as I'm experimenting with it at the moment when I get time.

Thanks all for coming - write-ups of the previous chats can be found on the wiki.

Read More

2008-10-08 - Upcoming Architecture Chat

Hi All,

The Architecture Chat is tomorrow - or probably "today" by the time you read this - Thursday 9th October, 11:30am, Garrisons, Sylvia Park.

Some topics that have caught my eye since last time include:


If anyone else has some additional topics they'd like to discuss (or have raised in their absence) then just leave a comment on this post or send me an email.

See you all tomorrow.  And remember, newcomers are always welcome - see the wiki for details on location and write-ups from previous sessions.

Read More

Moving to Orcon LLU - bumpy ride so far

As some may have noticed, this blog has been up and down like a yo-yo for the last two weeks.

The issues are caused by the fact that I host it off the home office connection - which, by and large, has been solid as a rock for the last 2 years, considering the amount of traffic the site gets is fairly minimal.

I opted to be one of the first guinea pigs to try the LLU (Local Loop Unbundling) offering from Orcon on the home office - alas, it has not been a pain-free experience.  I think Google Analytics tells the story best:

[Google Analytics traffic graph showing the outages]

The connection got switched to LLU around the 21st / 22nd of September - at which point my connection became a bit erratic - then it started working well enough, before going wonky again and then falling over altogether the following Saturday - at that point there was no sync at all.

Around Wednesday sync was restored, then auth as well on Thursday (so 5 days without any access) - the connection was looking OK at that point, but a day or so later I started noticing it was disconnecting every 5 to 10 minutes, then taking another 20 or so seconds to re-establish the connection.  That's still happening as we speak - though the helpdesk has now raised the priority of the ticket to urgent, in the hope that it might be resolved soon.

So far I've been pretty disappointed with the alacrity of Orcon's resolution process and the lack of direct contact that can be made with the LLU team.  Normally with Orcon, issues get logged with the help desk, a ticket is raised, and after a couple of days you end up dealing directly with corporate support (i.e. people who know what's wrong) - this process works well, and seems to filter out those people whose problems actually relate to their own hardware or lack of knowledge.

Not so with LLU: you raise a ticket, but the LLU team can't be contacted directly, and - as happened with me - I had no connectivity for 5 days with nothing done about the issue; it just sat in the queue, and the helpdesk did not seem to know any more about it than I did.  I ended up feeling bad having to pester the helpdesk every day to find out the progress on the issue, and they kept feeding me the same line of "the LLU tech will call you once the issue has started being worked on / resolved, I'm sure they'll do a card reset at 1am tomorrow".  So far nobody from Orcon has ever called me (and I'm not actually sure they ever did do a card reset, but I suspect that's probably not the issue either).

Now I'm not blogging this out of venom - I mean, all said 'n done I am guinea-pigging the service to some extent, and I know that if I'd just stuck with my previous Telecom/Orcon mix none of this would have happened - and the @Orcon folks on Twitter have offered to help push the ticket through quicker, which was nice (incidentally, ISP available on Twitter == good) - but still, I just hope the issue resolution process becomes a little more robust, and I thought this might be interesting to anyone else out there considering jumping on the LLU bandwagon.

So my peeves so far are:

  1. Orcon don't actually seem to be doing any follow-up - I had to do all the calling (and sitting in the queue).  Incidentally, 3pm is the perfect time to call their helpdesk; it's never busy.

  2. The LLU team can't be contacted - that bugs me - pestering the help desk for days on end is counter-productive when the issue is sitting with the LLU team.  Being able to check the ticket status (with notes etc.) on-line would have been nice as well (and would have saved some phone calls).

  3. The helpdesk doesn't seem to even know when the LLU team will look at an issue.  I got the distinct feeling the help desk guys knew as little as I did.

  4. No weekend support - I lost connectivity early Saturday, but the LLU team doesn't work on the weekend (or so the helpdesk told me) - this sucks.  Telecom have engineers working weekends, and since Orcon are responsible for phone issues as well, I could end up with no internet or voice for an entire weekend - something I've always found Telecom very quick to respond to.  I would hope this only applies to home customers!

Hopefully these (and the technical issues I'm having) are all just teething problems that will go away as the LLU rollout continues - in the meantime, forgive this blog for being partially unavailable, and my apathy in not bothering to move the site to dedicated hosting (which I do plan to do sometime this year, probably along with a change in blogging software - but I'm too busy with other things at the moment).

Read More