Blog

How to Run a Code Club – We’re Hosting a Volunteer Session at our Offices

We’re hosting a Code Club community training evening to give anyone who is thinking of becoming a Code Club volunteer an opportunity to find out more about Code Club and get an insight into what to expect and how to get involved. Linda Broughton, the Code Club Regional Coordinator, will be running the session and doing all the talking. The straight fact is that Leeds doesn’t have a Code Club running in every school, and it should have. The only way that can happen is with more industry volunteers.

PLEASE REGISTER & COME ALONG

Code Club is a nationwide network of after-school coding clubs for children. All the clubs are led by volunteers and this training is designed to give volunteers all the information they need before they start running a Code Club.

The training will cover:

  • What is Code Club?
  • Volunteering with Code Club
  • Working effectively with schools
  • Safeguarding
  • An introduction to Scratch & the Code Club projects

Who is the training for?

  • Anyone who is considering running a Code Club and would like some more information before they sign up
  • Anyone who has signed up as a volunteer but has not yet started running a Code Club

Volunteering for Code Club will be the best thing you do this year. Guaranteed.

Building a Test Email Server

This post explains how to set up an email server on GNU/Linux that can be used for testing applications. It allows you to test that emails are correctly addressed (except for BCC) and allows you to receive the emails and view them in any POP3 email client just as if they’d been received normally.

These instructions assume Ubuntu 12.04 LTS, and that we are installing our SMTP and POP3 server on the same computer that is running our application, and that the application uses the local SMTP relay.

First, install postfix, which will be your SMTP server.

sudo apt-get install postfix

When asked what type of mail configuration to use, choose “Local only”. It doesn’t matter what you choose as the domain name.

Next, add the following line to /etc/postfix/main.cf:

virtual_alias_maps = regexp:/etc/postfix/virtual

This sets up our use of virtual alias maps which we’ll use to redirect all the mail in the next step.

Then create the file /etc/postfix/virtual and add the following line to it:

/.*/ mailsink
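
To sanity-check the map, you can query it directly; postmap -q evaluates regexp tables in place, so any address you try (the one below is arbitrary) should resolve to mailsink:

postmap -q "someone@example.com" regexp:/etc/postfix/virtual

If it prints mailsink, the redirect is in place.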

We’ll need a user for this mail dump account, so run the following:

sudo adduser mailsink

Be sure to give it a reasonable password as this will be used to access emails later.

You may also wish to block spam if the server is public-facing, in which case add the following line to /etc/postfix/main.cf:

smtpd_recipient_restrictions = reject_rbl_client zen.spamhaus.org, reject_rbl_client bl.spamcop.net

Now we need to install a POP3 server:

sudo apt-get install dovecot-pop3d

The following command enables clear text authentication over POP3:

sudo sed --in-place "s|#disable_plaintext_auth = yes|disable_plaintext_auth = no|" /etc/dovecot/conf.d/10-auth.conf

The following command sets the mail_location in Dovecot:

sudo sed --in-place "s|#mail_location = |mail_location = mbox:~/mail:INBOX=/var/mail/%u|" /etc/dovecot/conf.d/10-mail.conf

The following command ensures that Dovecot has permission to delete emails from the inbox:

sudo sed --in-place "s|#mail_privileged_group =|mail_privileged_group = mail|" /etc/dovecot/conf.d/10-mail.conf

Some email clients will require you to specify an outgoing SMTP server too. To make this easier you can enable SMTP access to this server. However, this will allow junk mail to be delivered to the test mailbox, so avoid it if you can. To enable SMTP access run the following command:

sudo sed --in-place "s|inet_interfaces = loopback-only|inet_interfaces = all|" /etc/postfix/main.cf

Any mail sent using this SMTP server will be routed to the mailsink inbox.

If you are using UFW then open up the POP3 port (and SMTP, if you enabled external access above):

sudo ufw allow pop3
sudo ufw allow smtp

Now restart the affected services:

sudo service dovecot restart
sudo service postfix restart
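
If you want to be sure both daemons came back up, check that something is listening on the SMTP and POP3 ports:

sudo netstat -tlnp | grep -E ':(25|110) '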

You can now configure your email client to access the server using POP3. You should configure it to leave messages on the server so that you can test using different email clients. If the server is accessible over the internet then you can configure Gmail and Hotmail to access the mail using POP3.

Run the following command to send a test email (to verify that all email addresses get re-routed, use a real email address that you have access to; if the email doesn’t reach that address then everything is OK):

echo $'Subject: TEST\n\nHELLO' | sendmail here@there.com
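
Because every address is aliased to mailsink, the message should land in that user’s local mailbox. You can confirm delivery before even opening a POP3 client:

sudo tail /var/mail/mailsink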

Product Backlog Tools – How to Prioritise Using the Cost of Delay

Have you wondered how to arrange a to-do list in the correct order? Probably not. There’s nothing to learn, right? Clearly, prioritising what to do next shouldn’t be hard. But don’t underestimate how costly a casual attitude to this can be. Most of the time prioritisation is done on gut feel, but a sound mental model of the economics will speed up planning, help resolve differences of opinion and bring in money sooner.

We use an input queue, often called a backlog, to pull work items from. The backlog is nothing more than an ordered list of things to do, with the next most important thing at the top.

Given a list of work, most people go about prioritisation by choosing what’s important to them, or by identifying what’s important to users based on gut feel or customer feedback. Gut feel and feedback are fine, but they’re not the whole story, and comparing the right criteria can make the ordering easier and more profitable.

A cleaner, more measurable approach is to focus on the opportunity cost. The opportunity cost is a fancy way of asking: what will it cost not to do something? Another phrase often used is the Cost of Delay, or COD. A commercially clear way to compare two backlog items is to ask, “What will it cost to delay doing this item compared to the item below it?” By placing the more expensive of the two above the other, then moving down the list, a Cost of Delay ranking will emerge.

The Cost of Delay could take the form of lost sales or subscribers, in other words market share. Or, in the case of back-office process automation, the perceived savings in time and labour not kicking in. Putting exact financial figures to these isn’t easy and is generally unnecessary. Reducing the comparison to a single cost variable and using whatever past data is available to make a judgement is normally enough. There are some interesting techniques and collaboration games that can improve speed and objectivity still further, but they will be covered in a future article.
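
Exact figures aren’t needed for this to work; even rough relative numbers make the ranking step mechanical. As a minimal sketch in JavaScript (the items and weekly figures below are entirely made up):

// Hypothetical backlog items with rough weekly Cost of Delay estimates.
var backlog = [
    { name: "Report automation", weeklyCostOfDelay: 1200 },
    { name: "Checkout redesign", weeklyCostOfDelay: 5000 },
    { name: "Social sign-in", weeklyCostOfDelay: 2500 }
];

// The item that is most expensive to delay rises to the top of the queue.
backlog.sort(function (a, b) {
    return b.weeklyCostOfDelay - a.weeklyCostOfDelay;
});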

Tracing everything back to money may seem cold and ruthless, but in the world of commercial software it is reality. Even cool things like beautiful, contemporary design benefit from this Cost of Delay comparison. Users tend to place greater trust in pleasing, easy-to-use websites, and this trust factor will influence sales and market share.

Sometimes the order of work seems inevitable and based on perceived feature dependencies. For example, it may seem essential for users of an online web service to be able to create an account profile before doing anything. With care, creativity and engineering jujitsu this could be reversed, allowing other revenue-generating features to be released earlier. If you’re sceptical, check out how Doodle gets by without user accounts.

Ultimately, breaking functionality down into mini features that may be delivered early, and crafting the build schedule with an eye on the Cost of Delay, is the mark of a canny and experienced product development team.

Building RaceBest

This year saw the launch of a new service from Leeds-based start-up RaceBest Ltd. NewRedo were responsible for building their online system and we continue to host and support it.

In building the system we utilised some of the latest open-source technologies to build an interactive website for the running community. The service includes a race calendar, online entry, results and reviews, all managed by RaceBest staff using a custom-built administration section.

The front-end mainly uses client-side rendering powered by AngularJS, backed up with server-side pre-rendering for clients that don’t have JavaScript, such as search-engine spiders. The back-end uses the document-oriented database CouchDB, with node.js (proxied through NGINX) as the application server. Running on a small server, keeping the carbon footprint low, the website is really fast and was barely under load when it took the 750+ entries for their first major event.
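
The NGINX-to-node.js arrangement is a conventional reverse proxy. As a rough sketch of what that front door can look like (the server name, port and headers below are hypothetical, not RaceBest’s actual configuration):

server {
    listen 80;
    server_name racebest.example;

    location / {
        # Hand every request to the node.js application server.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}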

Agile Yorkshire Software Development Community

Every month NewRedo plays host to the Agile Yorkshire community group. Fifty people come together to listen and swap ideas. It could be more but we’ve only room for that many.
At NewRedo we’re very keen that the process of product development is a team effort and not just about writing code, and what’s great about Agile Yorkshire is that it attracts people from all disciplines. From engineers and UX designers to senior managers, they all come together in the same room for the same reason: to open their minds to better ways of working together and creating things with software.
To keep that diverse group happy requires a careful mix of topics and levels. Helping project managers build their knowledge of BDD frameworks, and hard-core programmers appreciate the cost of delay, provides value to everyone in the end. Another key mission of Agile Yorkshire is to promote new ways of working to a new audience, so having some entry-level content also plays a part.
This month Grant Crofton and Mike Burrows provided contrasting slots on F# and Kanban respectively. It was a balmy summer evening and discussion flowed on afterwards into the pub next door. The best part about any community gathering.

Recent Documents First in Apache Lucene

I spent a long time looking for how to put the latest documents first in Apache Lucene, to no avail. Finally, I’ve found a solution that works.

Most of the answers on the web suggested boosting documents based on their date. However, I was unconvinced how those solutions would pan out in the long term. The other day, I came across Apache Lucene Sort Tips, which describes how to use the TopFieldDocCollector. By chance it mentioned the constant SortField.FIELD_SCORE, which can be used when constructing a multi-field Sort object.

So, the answer is simple, but I thought I’d write a post specifically addressing this use-case so that an answer is easy for others to find. You need a field containing the modified date of all your documents; storing this as an ISO 8601 string does the trick. Now construct a Sort object passing SortField.FIELD_SCORE as the first field and your date field (descending) as the second, and hey presto!

So, here’s how we create our sort:

// Sort by relevance first; break ties with the most recent date.
var sort = new Sort(new[] {
    SortField.FIELD_SCORE,
    new SortField("last_modified", SortField.STRING, true)}); // true = descending

And use this with a TopFieldDocCollector in the usual way.
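
For completeness, here’s roughly what “the usual way” looks like; this is a sketch against the Lucene.NET 2.x collector API, and reader, searcher and query are assumed to already exist in your code:

// Collect the top 100 hits using the score-then-date sort.
var collector = new TopFieldDocCollector(reader, sort, 100);
searcher.Search(query, collector);
var hits = collector.TopDocs().ScoreDocs;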

Massive thanks to the author of the original post. I just thought it was worth posting something specifically for this use-case.

UX Design with Persuasion, Emotion and Trust

Any web site must be usable to be useful. People must be able to find what they’re looking for and get things done. A website with poor usability makes users frustrated, makes them feel stupid, or both. The likelihood of them returning drops, especially if they can find an alternative website to visit.

As the web matures, usability is improving. Experience and knowledge of what works and what users like is becoming more common. A usable website is no longer enough; the bar is slowly shifting from “can users do something” to “will they do something”. Building a website that converts browsers into customers is now critical to success. Of course, this stuff has traditionally been the bread and butter of a marketing team; but as roles blur and attention shifts, web designers, usability experts and software engineers need to understand these concepts too.

Designing for user engagement in this way is often called designing for persuasion, emotion and trust, or PET for short. US company Human Factors has popularised this approach and offers specific training. If you want to dip your toe in the water, Stephen Anderson has produced some great-looking cards to help start thinking about these concepts. Enjoy.

NewRedo Co-founder Royd Brayshay Nominated for Agile Award

NewRedo Co-founder Royd Brayshay has been nominated in the Best Agile Coach category of the 2012 UK Agile Awards.

The objective of the UK Agile Awards is to recognise the People, Projects and Products that have contributed to the success of Agile in the UK.

The UK Agile Awards is a not-for-profit organisation and the Awards are open to the entire UK Agile Community.

Royd was nominated by recent client HML as having “made a significant contribution to HML’s transition to new ways of working in IT, enabling a new (to agile) team go from formation to first valuable delivery in a matter of a few weeks”.

For more information regarding training and coaching services offered by NewRedo please email us at: info@newredo.com and sign up to our regular newsletter.

Why User Story Points Help Agile Effort Estimation

Story Points seem to enjoy a perpetually difficult place in the lives of many agile teams. They are the source of much confusion and teams often choose not to adopt them.

The traditional practice of quantifying work in days or hours seems so obviously straightforward. Redefining time must surely only bring confusion? Others have done a good job of explaining the direct relationship between user story points and time. Here we’re interested in why you would use them at all.

To understand the concept better, try thinking about the overall plan and not individual user story cards. Aggregating user story points reveals the size of the mountain ahead. Using data collected from the past reveals how fast the mountain will be climbed. The beauty of Story Points is not the points themselves but their use and calibration against past progress data. Not knowing a team’s past work rate, often called velocity or flow, is a common missing link.

Using the mountaineering metaphor, there are many things that affect a team’s speed of ascent. Bad weather, altitude sickness, or getting lost will all influence progress. Looking back at each day’s altitude gain will give an increasingly accurate prediction of reaching the summit. Mountaineering, like software development, unfolds unpredictably. The key point is that the plan can be recalibrated with each day that passes. As more performance (climbing) data accumulates, the clearer the outcome becomes.

If a software team uses time, much of the context is missing. If they estimate a job at ten days, is that ten perfect days? Is that ten days at sixty percent efficiency? Do they need the best talent in the team assigned? What if the team doubles in size? Re-estimation can help when things change or the wrong assumptions were made; but estimation is expensive. What if a hundred user story cards exist? A day or more could be lost, and re-estimation may be needed multiple times.

Imagine yourself on holiday in an unfamiliar country. You want to visit a notable tourist attraction and, to make a plan, you ask a local how long it will take to get there. They respond with “two hours”; but is that by bus, on foot, or in a rental car? It could be five or fifty kilometres away. Alternatively, if the answer was “eight kilometres”, you can mentally calibrate based on the available transport and, importantly, your past knowledge of its velocity. If your transport changes, you intuitively recalibrate. Non-time-based sizing is unambiguous and more versatile.

Sizing with a non-time-based unit, and calibrating the volume of work ahead against the rate of work completed, provides a constantly re-adjusting prediction of the future.
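
As a worked example of that recalibration, here’s a minimal sketch in JavaScript with made-up numbers; “velocity” here is simply points completed per iteration:

// 120 story points remain; the last three iterations completed 18, 22 and 20.
var remainingPoints = 120;
var recentVelocities = [18, 22, 20];

var averageVelocity = recentVelocities.reduce(function (sum, v) {
    return sum + v;
}, 0) / recentVelocities.length;

// Roughly six iterations to go; the forecast re-adjusts as each new
// iteration's velocity is appended to the history.
var iterationsRemaining = Math.ceil(remainingPoints / averageVelocity);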

Knockout JS Validation using an AJAX Callback Result

I was looking for an easy way to implement validation based on the result of an AJAX request without making it synchronous. I noticed that Knockout JS Validation wraps its calls to validation functions in a computed observable. This means our validation result will be dynamically re-evaluated if we base it on another observable.

The trick is to create an observable into which we feed the result of the HTTP callback, and use that in the validation function. Here’s a code snippet using jQuery too; it assumes there’s a service that checks that the field value is unique and returns true or false.

var viewModel = {
    myField: ko.observable(null),
    // Holds the latest uniqueness result from the server and drives validation.
    isMyFieldUnique: ko.observable(true)
};

// Whenever myField changes, ask the server whether the new value is unique.
viewModel.myField.subscribe(function () {
    $.getJSON(
        'myservice?myField=' + encodeURIComponent(viewModel.myField()),
        function (result) {
            viewModel.isMyFieldUnique(result);
        }
    );
});

viewModel.myField.extend({
    validation: {
        // Runs inside a computed observable, so it re-evaluates whenever
        // isMyFieldUnique is updated by the AJAX callback above.
        validator: function (val, param) {
            return viewModel.isMyFieldUnique();
        },
        message: "myField is not unique."
    }
});
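
To wire it all up, initialise the validation plugin before binding; this is a sketch assuming the standard Knockout Validation setup:

// Let Knockout Validation insert error messages next to bound elements,
// then bind the view model as usual.
ko.validation.init({ insertMessages: true });
ko.applyBindings(viewModel);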