Tuesday, December 13, 2016

PM Tools


Think of a restaurant where people can choose whatever they like from the menu. How would you organize 8 workers in the kitchen to prepare 27 meals in the shortest time? Project management tools are meant to help solve this kind of problem.
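To get a feeling for the underlying problem, here is a toy greedy scheduler - a minimal sketch with made-up preparation times, using the classic longest-task-first heuristic, not any tool's actual algorithm:

    using System;
    using System.Linq;

    class KitchenScheduler
    {
        static void Main()
        {
            const int workers = 8;
            // 27 meals with made-up preparation times (minutes)
            int[] meals = Enumerable.Range(1, 27).Select(i => 5 + i % 7).ToArray();

            int[] load = new int[workers];
            // Greedy rule: longest meal first, to the least loaded worker
            foreach (int minutes in meals.OrderByDescending(m => m))
            {
                int idle = Array.IndexOf(load, load.Min());
                load[idle] += minutes;
            }
            Console.WriteLine("Kitchen finishes in about {0} minutes.", load.Max());
        }
    }

Real PM tools add the human factor on top of this kind of math: priorities, dependencies and continuous re-planning.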

A company can have 2-3 projects... or dozens of projects to work on at the same time. Each project can be broken down into processes (some of them must be done sequentially, others can be done in parallel), and each process can be broken down into activities.

Both processes and activities can be prioritized and re-prioritized as needed.

Gantt charts are classic tools used for a small number of slowly moving projects with a well-known outcome - in construction you have the design, so you can make good estimates for processes and activities.

Kanban boards are newer tools used in agile project management, and they are good for visualizing the processes of projects with many unknown aspects - in service-based companies you have to redistribute workers and machines according to incoming orders and technical incidents; the IT&C industry deals with the most rapidly changing factors of all, like traffic on the wire or in the air and user requests.

There are many dozens of PM tools out there; I'm going to list three of them.

Trello for 2-3 small projects: https://trello.com/

KanbanTool for bigger projects - it has "swimlanes" (projects and processes can be kept together and handled easily), "sub-tables" (for complex projects), and in general it scales well as a company grows: http://kanbantool.com/

Microsoft Project - although I've never evaluated the costs of adopting Office 365 with the additional goodies: https://products.office.com/en-us/project/compare-microsoft-project-management-software

Thursday, July 28, 2016

Dotnet Core On Duty


Several hosting providers have added ASP.NET Core 1.0 to their offer. In other words, after nearly four years of client-side presence, .NET Core is now on duty at the server side.

The hardware and communication technologies that emerged after the first version of the .NET Framework made a major redesign of Microsoft's managed code environment necessary, and .NET Core is finally ready to go for early adopters.

Some people testing or evaluating ASP.NET Core's feature set wonder why it doesn't include a mailer, an image processing library, a DataAdapter or a SignalR implementation.

In my opinion this is because it has been designed as a modern multi-platform tool with a loosely coupled architecture and containers (Docker) in mind, so it can employ the appropriate platform-specific software resources with maximum efficiency.

The server-side operating system of your choice already has native tools for mailing, charting, generating images, rich text or data sheets, and for handling multimedia files, and those tools will certainly work with better speed and stability than a generic library.

As middleware, ASP.NET does not need to duplicate the role of a web server, a game server or a media streamer; those roles are normally delegated to specialized local processes, whose concrete pros and cons depend on the server operating system or your third-party vendors.
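A minimal sketch of that middleware idea, assuming the standard ASP.NET Core 1.0 pipeline API (the header name and response text are my own inventions):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Http;

    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            // A pipeline component: do its own small job, then pass the
            // request on; heavy roles (streaming, mailing) stay in
            // specialized external processes.
            app.Use(async (context, next) =>
            {
                context.Response.Headers["X-Handled-By"] = "pipeline-sketch"; // made-up header
                await next();
            });

            // Terminal component: produce the response.
            app.Run(async context =>
            {
                await context.Response.WriteAsync("Hello from the ASP.NET Core pipeline!");
            });
        }
    }

Each component does one thing; everything else is somebody else's process - exactly the loose coupling mentioned above.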

From a project management point of view, clearly defining the objectives and the domain of relevance are key aspects of a successful project, so in my opinion ASP.NET Core is on the right track and a good choice for new projects with a service-oriented architecture.


Friday, July 22, 2016

Multi-platform Development Now and Then


Between 2002 and 2004 I was a brave full-time enterprise employee preparing myself for a freelance career. I spent some time playing with HTML, JavaScript, Java, PHP, C++ and several Linux distros.

At that time developing cross-platform code was not only a shiny prospect but a common user requirement, and getting one's Windows applications running on Wine was a cool feature.

When I had the time to look into the core of a free multi-platform programming language implemented in ANSI C (like PHP or Python), I realized the dimensions of the human resource investment and dedication necessary to develop and maintain such products, which predestines them to form an oligopoly in terms of market structure.

Between 2005 and 2008 it was still a good decision to invest in multi-platform applications - at that time the libraries based on managed code were not mature enough to serve efficiently a considerable range of market demands, typically coming from domains where business processes were changing rapidly.

In the meantime the spread of multicore processors and broadband Internet services made possible new programming patterns focused on better server response times and more responsive user interfaces with rich, internationalized content - multiple challenges calling for the refactoring of classic libraries.

During the previous decade the hardware industry evolved more rapidly than the software for the new technologies; that's why mobile operating systems like Symbian, or now Android, could achieve such large popularity - something had to be put on the new hardware to get it working and selling.

Due to the continuous diversification of processors and hardware architectures, an increasing number of software companies have started choosing Java or .NET for their long-term projects.

For a small or medium software company, doing cross-platform coding is not a financially feasible option anymore, and managed code is employed, among other things, for execution speed improvements.

The development efforts invested in .NET and targeting parallel and asynchronous programming have led to a solid foundation for business-critical apps, doable within the limitations of a concrete triangle of budget, time and quality.
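As a small illustration of that foundation - a sketch of my own, with the URLs supplied by the caller - several downloads composed asynchronously, with no thread blocked while they are in flight:

    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ParallelFetch
    {
        // Start all downloads, then await them together;
        // failures surface as exceptions from Task.WhenAll.
        public static async Task<string[]> FetchAllAsync(params string[] urls)
        {
            using (var client = new HttpClient())
            {
                Task<string>[] tasks = urls.Select(u => client.GetStringAsync(u)).ToArray();
                return await Task.WhenAll(tasks);
            }
        }
    }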

.NET is my world; I'm not familiar enough with Java to write about its recent evolution.

Thursday, July 21, 2016

Successful Software Projects?


Reid Hoffman's opinion about early user testing is now taught in courses: “If you aren’t embarrassed by the first version of your product, you shipped too late.” This happens because nowadays prototypes are used for presenting brand new product ideas.

Prototypes are working drafts, similar to the mockups used by architects. The look and feel of a software prototype is pretty much the same as that of a finished product, and people can get their hands on it, although most of its functionality is missing or replaced by static content (input forms, reports etc.).

Prototyping is not a new software development methodology; you can find 20-30 year old custom utilities that started as throw-away prototypes and were then kept in production forcibly by their fans, regardless of the costs implied by such decisions.

Unfortunately it happens frequently that investors behave just like the above-mentioned fans and push early prototypes into the public zone instead of targeting a limited and knowledgeable audience of alpha testers.

Once an early prototype gets to the free and highly competitive market, the feedback coming from informed users will certainly not be positive, even if the product idea itself is great.

The toolset used for preparing a prototype is also a key factor.
Currently there is a large number of scripts, libraries, frameworks and database engines suitable for rapid prototyping and for minimizing the cost of getting a prototype out of the door ("fail early and fail often" applied in the context of a product portfolio).

The problem with most such tools is that they might have serious limitations regarding aspects like scalability, employing cloud technologies, internationalization, extensibility or integration with other software.

In case a low-cost prototype gains popularity and investors, sooner or later the original code and tools have to be replaced, which usually implies changing teams - a decision that might be fatal for the future of a product.

After all, software is an extension module to hardware, and due to market pressure continuous change management is needed to keep a software product attractive to users.

The key figure of successful software projects has always been a manager who receives sufficient respect, trust and funds for selecting the right resources for doing the right thing at the right time.

Personally I'm rooting for engineer-led startups, a sane new trend.

Workbook Versus Database


The first step towards a database is usually one's first Excel sheet. In time one's sheets get collected into workbooks, then shared within a team or distributed to a number of people.

Finally, when the mess gets out of control, some IT guys save the business by creating a database and processes for updating it.

Most of the time users spend a number of years with their growing sheets, and sometimes they invest in additional hardware to be able to keep using some "cool" sheet that includes hundreds of thousands of formulas.

Excel employs so-called "tight loops" to get through the calculations as quickly as possible, and these tight loops are big resource consumers.

Sooner or later a sloppy sheet with too many formulas is going to overtax the operating system's resource management capabilities. In other words, workbooks are not scalable data containers.

Theoretically it's possible to remove all the formulas from a sheet and to use VBA or other scripts for the calculations.

In VBA, tight loops can be mitigated by making them write some value into a cell from time to time (yielding control this way gives the system a chance to process pending events and reclaim resources).
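The same idea translated into C# terms - a sketch with a made-up calculation, where Application.DoEvents from Windows Forms plays the role of the periodic cell update:

    using System;
    using System.Windows.Forms;

    class TightLoopDemo
    {
        // A made-up heavy recalculation: yielding every few thousand
        // iterations lets pending events be processed and the UI repaint,
        // much like writing a value into a cell does in VBA.
        public static double Recalculate(double[] values)
        {
            double total = 0;
            for (int i = 0; i < values.Length; i++)
            {
                total += Math.Sqrt(values[i]); // stand-in for a heavy formula
                if (i % 10000 == 0)
                    Application.DoEvents();    // briefly hand control back
            }
            return total;
        }
    }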

The real problem is that beyond a certain level of complexity, implementing one's business logic in VBA (or another script) and workbooks leads to more expensive and less reliable software than opting for a scalable database solution.

Free Source As Public Library?


In 60 years IT evolved from a subject of scientific research into manufacture and then into an industry with its own design, management and quality standards.

Now there are practically three generations of specialists working in different organizations who have witnessed these major shifts and have had to drop the “previous” technology and business model to find their way with the “current” one.

20-25 years ago software development was still manufacturing-based, and the market was so hungry for new products that users accepted testing alpha-stage free software packages in their own time.

Then, within a decade, the global spread of the Internet put pressure on developers to invest effort in architecting loosely coupled solutions rather than growing monolithic desktop applications.

The investments coming from commercial companies have helped the software business evolve from manufacture into industry, and interoperability needs have been beneficial for standardization.

10-15 years ago market saturation and the economic slowdown affected numerous software companies.

Most companies have adopted free software usage and outsourcing as collaboration models in order to consolidate their businesses and partnerships. During this adaptation process the limitations of the GPL license became evident, and other licensing models (MIT, Apache) became more popular.

As the life-cycle of a software product is 3-5 years, the current market offer includes a considerable number of applications employing many modules designed with earlier hardware architectures in mind.

On the other hand, a server-side interpreter or a database engine with threading problems doesn't play well in the cloud, and JavaScript is not suitable for everything needed at the client side.

Short-term investors might be right when opting to extend the life-cycle of a classic software product, but in the long run carefully selecting new tools and using them correctly will keep the boat afloat.

(2014 - 2016)

How Many Is Too Many?

When designing or updating a software product, one needs to consider both human and technical factors. Whenever I refer to a form, the same goes for web pages.

Some Biological Limits: 

Frequently auto-refreshing tables (every 10-20 seconds) and frequently opening or closing forms are very tiresome for the eyes.

Sharp visual acuity covers only a small spot (a spatial arc of about 10 out of 360 degrees), suitable for holding a text column for reading - in practice this is about 40 characters per line with the default medium font (when one's eyeballs are forced to move horizontally, the text line is too long).

Some Psychological Limits:

When a document's background image distracts the attention (intensive colors, crowded patterns, reduced contrast between background and text), users get tired in a short time.

Our divided attention can manage up to 6-7 things; consequently a form should not have more than 6-7 groups of controls (menus, tabs or groups), and a group should not have more than 6-7 controls (labels excepted).

When this limit is exceeded, the user feels overwhelmed and frustrated by the user interface.

The Business Logic:

The quantity and frequency of the data exchanges with other computers decide what networking solution we need.

The type of database engine, software and hardware we need depends on the number and contents of documents, archives and processing requirements.

Resource Management:

A Windows application is a collection of modules (executables and libraries), which can be shared by multiple users across a network.

Our local copy needs to accommodate itself to our local hardware resources, and it should behave decently when consuming network resources or accessing remote servers.

Forms and Controls:

When designing the user interface it's important to keep a good balance between user requirements and technical limitations. 

Considering the answers to the questions below will help in structuring the user interface.

How many forms can I have in a project? Maybe 2, 4 or 10 - it depends on how many controls they contain in total.

Each control is a window that uses memory, handles events and needs to be (re)painted from time to time. Consequently each control is a consumer of RAM, processor time and video capabilities.

How many controls can I have on a form? Counting all the object-tree items of a form can help us evaluate it. If our form contains more than 50-100 controls, it's time to think about splitting it in two.

It's important to know that using tabs is good for making our user interface tidy, but all the controls present on a tab are created "at once" with the parent form, regardless of their visibility.
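Getting that number is easy with a small helper of my own (Windows Forms flavor):

    using System.Windows.Forms;

    static class FormMetrics
    {
        // Walk the whole object tree, including controls sitting on
        // hidden tab pages - they are all created with the form.
        public static int CountControls(Control parent)
        {
            int count = 0;
            foreach (Control child in parent.Controls)
                count += 1 + CountControls(child);
            return count;
        }
    }

    // Usage idea: if (FormMetrics.CountControls(myForm) > 100) consider splitting.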

The more powerful a machine is, the higher its limits are, but spreading controls over multiple forms will always be a necessity.

(2011 - 2014)
 

Welcome!


About two years ago I wrote my first blog entry in a private space, and now I'm feeling ready to go public.

As an old-school IT generalist I've made my way from custom-made proggies to standardized packages, and in the meanwhile I've discovered that in our industry the study materials change in less than a decade.

"Welcome to My World!", and your constructive criticism is more than welcome!