
Showing posts with label Software. Show all posts

Wednesday, 27 February 2013

Building Agile Teams – Not sure?

 

In the process of adopting agile methodologies, some organisations don't really pay attention to how their teams are formed. Forming a team for a project is different from forming teams in an organisation: the latter needs a lot of thought, compromise and patience, owing to the needs of the business/client. I don't understand why organisations bother adopting agile without this view. It is all done for that important client/product/project, yes, that's the primary objective, but it doesn't cut it when you mess with people's minds by constantly moving them around teams. I would go as far as saying at least 30-40% of this effort is wasted: every time a team is put together it goes through the same cycle of forming, storming, norming and performing. I really think this is the most underutilised concept from Tuckman; it is used as a punch line on slides with little attention to the consequences of the process. Teams, once formed, should not be broken up without valid reasons (sorry, multiple projects and cost are not good enough reasons; let me know if you have others, I know I have a few).

At this point you will be thinking this is just the usual crap people talk about. Here is a hint: if you use the word resource for a team member, you have never paid attention to the do's and don'ts listed below. The change in perception that's needed is that a skilled technologist in one context is not productive or efficient in another. The context here is that of the product, project or service being worked on. This context is never constant in IT, and all you can do is keep the team constant (even your machines are changing with updates every day). Can two moving parts in a system bring stability, unless the forces negate each other? Are there exceptions? I'm not sure…

1. Don't build teams of specialists.

When companies organise teams by discipline, such as testers, analysts, designers and developers, all they have done is create silos of specialists who are not effective in how they work together as a software development team. The dynamics between individuals within a team are quite different from those between people who work in teams founded on disciplines, and that directly reflects on how efficient and how productive they are. These discipline-based teams create the biggest hurdle: how will you align the goals of teams based on discipline with the goals of the project they are working on? It doesn't make sense, because they achieve neither.

2. Don't share people unless they are specialists.

The whole theory that people in a team founded on a discipline are specialists is not true. You become a specialist by virtue of doing something valuable in a project or system. Do not share the basic functional roles of analysts, testers and developers between projects; it is not as efficient on cost as you perceive it to be. There used to be a time when I acknowledged the cost benefit of sharing certain roles between projects, but increasingly in recent times I have lost faith in this idea. To be honest, it does more harm and costs you more (not accounted for or measured most of the time) than any real cost benefit. I don't think the human mind is capable of treating two goals with equal priority if a person is shared between teams. If you ever wanted to do this, it may be achieved in a cross-functional team, where a specialist is shared and the team has a bunch of all-rounders.

3. Build Cross Functional teams

Building software has been split into so many functional roles these days that it is beyond me to understand why. Is it difficult for someone to do a bit of analysis and testing alongside developing code? I am not asking them to be specialists in those disciplines, nor am I asking them to change their careers. I personally feel quite proud of being able to do more than one functional role. By being cross-functional, all that is expected is that you can do the other kinds of work required by the team, so you keep the work flowing on your Kanban. But I have noticed teams increasingly blocking work, and team members not picking up a card that needs to move further towards done. This may also be compounded by managers feeling that only specialists should do that work. Many a specialist has screwed up, and I am witness to it, so what stops teams from encouraging newbies/all-rounders to take a chance? There is always the specialist to come back and review it.

4. Encourage common goals for the teams.

Career progression and teams based on disciplines have murdered the whole system and have pampered individuals into thinking that it is normal to work in one functional area and that it is not their responsibility to do anything else required to achieve the common goal. Are they specialists in their own functional area? I doubt this theory, because a majority of them are not really experts of any kind; they do just about enough to keep things going and make just about the same mistakes in judgement as team members in other functional roles. The corporate annual review system has completely conditioned people to think that they need to meet their personal goals before the team goals (very few places even have the latter). Since this is tied to some kind of bonus, the motivation is pretty high to get at least your personal goals right. This system, in my view, is the worst evil in most organisations and is a passive killing machine.

5. Don’t encourage Big Teams

Any team of more than 8–10 people is big. By building teams larger than this you end up increasing overheads. This translates into team members beginning to feel they don't get enough information and time to function properly, which leads to less visibility of the flow of work and pushes some members of the team to delegate work and, in some extreme cases, to micro-manage.

6. Don’t use Agile if the team is working as a resource pool

Agile can't work with a model where multiple people work on multiple projects as if they were a single team. So don't force agile onto a team if your resource management is based on this model and then ask why agile doesn't work. It is best not to use agile with this model.

Wednesday, 30 January 2013

The Agility Grid in organisations

We all work in an industry called Information Technology, where information is the most under-rated commodity. We have structures in place which were not modelled for this industry; we have been retrofitting our operational and management models based on the principles of other industries. The irony of all this is that not all of us are from the era when this started, nor are we from those other industries. A majority of us started our careers in this industry, and yet we have some of the most rigid organisations, and in the last 5–10 years we have been trying too hard to make these structures agile and flexible.
Organisations have structures, some horizontally aligned and some vertically aligned. When organisations embark on the path of using/enabling agile practices (which is the trend at the moment in IT), they tend not to reorganise themselves to enable this. A very typical observation has been that most of the change happens from middle management down to the operational teams (in IT, the development teams).
I think at this point it is important to reiterate that being agile is not just about following good software development practices and processes, and cannot be achieved purely by fixing one business unit. It is really an outcome of the collaboration of several business units in the simplest possible way, so that when they work together the outputs of each business unit enable the others to work efficiently, with agility, in an iterative way. That iterative nature needs to be reflected in how they hire people, how they budget, how they specify products and also how they sell them. Developing a product iteratively is only the first step; reflecting on this and modelling your business around it is the optimum path for a business that wants to be agile.
Here is a theory I have been pondering for a while. It is in no way a pro-matrix-management theory; however, it does seem to hint that matrix management may be the way to go if planned and executed with care.
The Grid
An organisation is effectively a grid, where each line has people and resources which produce forces. These forces operate upwards/downwards or left/right. At the intersections on the grid, the forces align with or oppose each other to enable the flow of information and change. Where forces oppose each other you will find resistance and an inability to produce results; where forces support each other you will find progress and agility.
No, I am not preaching matrix management; I am trying to apply forces from physics to analyse the situation and find out which kind of organisation can adapt to agility better. I am not sure everyone will agree, but in the majority of organisations the decision makers sit along the vertical lines and the enablers along the horizontal lines. This is purely based on what I have observed; in some ways it is my interpretation.
In horizontal organisations there tend to be more enablers than decision makers. This is fine for making progress and letting internal changes flow smoothly. However, the ability to react to external forces (Porter's five forces, you could say) is reduced, an inevitable consequence of a smaller or slower decision-making process. So are we saying this is not going to work? No; on the contrary, a horizontal organisation seems to be the easier one to model and adapt to agility. A combination of collaboration practices that balance internal forces, and of how information is relayed and used, makes it a more viable change. Collaboration is not just about how people work with each other; it is also about how information is relayed and consumed in the organisation. Horizontally structured organisations with good communication channels and a democratic decision-making process could adapt to agility more efficiently.
So where does this leave vertical organisations? Based on the grid theory above, there tend to be more decision makers than enablers. This is fine, but progress is limited because there are fewer enablers. You may find that organisations which compete on market share generally fit in here; they tend not to be the innovative ones, and even where they are, it is worth looking at the product life cycle of these organisations and seeing how short the life span of their products is. P&Ls may tell otherwise, but look into these organisations and you will find a tinge of diversifying into multiple markets or industries. A relatively high ratio of decision makers tends to put these organisations in the mode of reacting to incidents rather than to change of any form, obviously because there are fewer enablers, with a reduced capacity to induce positive change or any form of improvement. This organisational structure is the toughest nut to crack; however, it is the most common structure in the industry.
Management structures reflect this aspect more than any other. This, I believe, is a consequence of modelling our management structures along the lines of the manufacturing industries, compounded by factors such as culture, control and power, and it shows in both governance and operations. I am not going to elaborate any further, but I simply feel the manufacturing-line model of management from the 80s no longer suits the 21st-century organisation aspiring to be agile.
Moving on: is Management 3.0 the answer? Not entirely; there is more to an organisation than management. We haven't even skimmed other business functions such as innovation, sales, marketing, strategy and so on. These aspects need to operate differently, which comes back to validate my original conclusion: it is not enough to change just a few business units to achieve agility. You have more on your hands than you think you do.

Wednesday, 15 June 2011

ScrewTurn Wiki

I was looking for some kind of ASP.Net sample site purely to demo some BDD scenarios at work, but I wanted to do it on a site more complex than the usual ASP.Net Customer/Order sample.

I found a couple of Wikis, but the one that caught my eye was ScrewTurn Wiki. First things first: it is free under the GPLv2 license (for details on commercial licenses see Commercial License Help).

The installation took less than a few minutes using the Microsoft Web Platform Installer. You install the Wiki in one of two modes: file system storage or SqlServer storage (just use SqlExpress). See Installation Help for details on choosing a mode. Apparently you can start with file storage mode and switch to SqlServer storage later (Data Migration).

The fact that you can manage ScrewTurn Wiki using Microsoft WebMatrix is simply brilliant; it is easy to use and makes publishing the Wiki straightforward. You can pretty much configure your hosting details if you want to host it on the internet and keep pushing your changes.

Now for plugins: quite a lot of them seem to be available. There are a vast number of navigational, text-editing and data provider plugins. In addition, you can customise different portions of the Wiki using your own providers; this seems like one of those things that was given a great deal of thought. See Custom Providers.

I guess I am very impressed by what the Wiki offers, but looking at the features I do wonder whether a product that was a Wiki is evolving into a CMS. Not sure, and I can't say I am bothered either; the only reason I raise the concern is that the Wiki as it stands is pretty simple, which is what appealed to me, and building too much into it could make it bulky and complex. I am just a developer, so I can't give an accurate view of what users of the Wiki would want. On the bright side, there are some really new features coming that can be leveraged: the V4 CTP offers native Azure support, which should be good if you want to use cloud-based services. See the Roadmap for details.

Thursday, 26 August 2010

The N+1 Iteration syndrome

I constantly ask myself whether I know what comes next at the end of a sprint or iteration, and whether I should make an effort to find out. I observe that, just like me, the members of the team focus only on the card their magnet is on in the current iteration. Adapting to a constant flow of user stories and requirements is not easy for any team, and it is as important as focusing on the stories in the current iteration. We as a team focus on the user stories on the board, but it may be worthwhile asking how many people in the team are really aware of what is coming up in the next iteration. If members of the team were asked to answer this question honestly, you would find that a vast majority probably don't have much information or are totally ignorant. I prefer the term N+1 for the next iteration. In most teams I have worked with, this problem is evident in one form or another, and some common symptoms are the ones below.

Symptoms

  • Analysts find it frustrating that they have to repeatedly read a story out and explain the same story more than a few times.
  • Team velocity sways massively, and the standard deviation relative to average velocity is quite high.
  • Requirements workshops are almost absent, and on most occasions the analysts seem to be in a different time zone from the team where the user requirements are concerned.
  • Team members are not sure about the size of a story and try to come up with a size as close as possible to the rest of the team's, rather than putting any effort into understanding the size of the story.
  • Constructive discussion, debate and implementation concerns are almost absent.
  • The team agrees too easily on the size of a story and gets swayed into a conclusion by anyone who can talk the team into one.
  • Large stories seem to be finished earlier than they ought to be, while some of the smaller stories take more time and sometimes end up looking like large stories.
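The velocity symptom above can be made concrete with a quick calculation: the coefficient of variation (standard deviation divided by the mean) gives a single number for how much velocity sways. This is a minimal sketch; the velocity figures are made up for illustration.

```python
# Hypothetical sprint velocities (story points) for six past iterations.
velocities = [42, 18, 35, 12, 48, 20]

mean = sum(velocities) / len(velocities)
variance = sum((v - mean) ** 2 for v in velocities) / len(velocities)
std_dev = variance ** 0.5

# Coefficient of variation: standard deviation as a fraction of the mean.
# A stable team might sit well under 20%; this erratic team is far above it.
cv = std_dev / mean
print(f"mean={mean:.1f}, std dev={std_dev:.1f}, CV={cv:.0%}")
```

Tracking this ratio over time is a cheap way to check whether planning changes are actually steadying the team's throughput.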

This syndrome manifests itself in different ways, with consequences of varying severity for the functioning of an agile team. The team should address this situation if they find these symptoms; the effects of not doing so include false velocities, skewed metrics, an increase in the cost of the project and, finally, a loss of trust from the users for whom we actually work. I wonder if I am making a big deal out of this, but that may be because I perceive the consequences of this syndrome as growing exponentially into bigger problems that can be quite damaging for the future of the team and the project.

We can mitigate some of these symptoms; a few ideas that allow you to improve and move in the right direction are below.

  • Introduce an N+1 section on the left-hand side of your Kanban or sprint board and line up the stories that will flow into the next sprint.
  • Encourage analysts who are working on the N+1 queue to speak about their analysis during your stand-ups; this spreads awareness of the N+1 iteration on a daily basis. The truth is that within an iteration the analyst probably works 50% of their time on the N+1 sprint and the other 50% on the current sprint.
  • Encourage your team members to pair with analysts to discuss and learn what they are working on; if you can, allow your developers and QAs to pair with the analyst for 5% of the iteration on a rotational basis. These pairing sessions really help non-technical analysts learn a few tricks and understand why you think a story is complex or simple.
  • Have a mini 15-minute session every day after the stand-up to pick one story from the N+1 board and discuss testability and implementation details with the analyst. This mitigates the loss of requirements workshops, which are long and can be boring anyway; small cycles of these sessions keep the team constantly involved in requirements. The term cross-functional team was not coined just for developers and QAs; it means all functions in the project.
  • Have some ground rules for your planning session:
    • The team attends the planning session with an awareness of the stories flowing into the Kanban.
    • You really don't want estimating to eat up all your planning time; planning is not only about estimating, it is also about discussing priorities and setting goals for your iteration. Spend some time planning how you will action retrospectives as well.
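The first idea above, an explicit N+1 column feeding the board, can be sketched as a toy data structure. Everything here is illustrative: the column names and story ids are made up, and a real board would live in your tracking tool, not in code.

```python
# A toy Kanban board with an explicit "N+1" column holding stories
# lined up for the next iteration.
board = {
    "N+1":         ["story-7", "story-8"],  # queued for the next iteration
    "To Do":       ["story-4"],
    "In Progress": ["story-2", "story-3"],
    "Done":        ["story-1"],
}

def pull_into_iteration(board):
    """At planning, the N+1 stories flow into To Do and the queue resets."""
    board["To Do"].extend(board["N+1"])
    board["N+1"] = []

pull_into_iteration(board)
print(board["To Do"])  # story-4 plus the two stories that were queued in N+1
```

The point of the structure is visibility: because the N+1 column sits on the board all iteration, nobody arrives at planning surprised by the stories flowing in.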

You will see that the team will at least lose the perplexed "I don't know what you are talking about" look and the "I can't be bothered" attitude; this could be a good starting point for addressing the problem, and it will allow your team to be as involved in planning as they are in the progress of the sprints.

Monday, 23 August 2010

Authoring and Automating - User Stories

In most agile development teams the responsibility of writing user stories falls into the hands of the analyst. The analyst may not in all cases be well versed in the idea of writing stories. This is not because he does not know what to write, but sometimes because he does not know the best way to express the story in the chosen story-writing platform. That still doesn't warrant a developer pairing with the analyst to author a story; in my opinion, developers are not welcome to pair with the analyst on authoring. Allowing this lets implementation detail find its way into the stories, and sometimes it ends up dictating the user's intention.

Authoring stories

The best person for your analyst to pair with is your QA; this proves the most useful.

  • The QA looks at a story early in the life cycle and ensures all aspects of the story are testable.
  • The ownership of the story is with the QA, and he/she is able to identify any automation concerns of the story.
  • Any scenarios that have not been thought through in a story, due to data-related anomalies, are identified.
  • The QA is involved in this process before the iteration in which the story is picked up, which allows the QA to bring valuable information on the size of the story to the planning session.
  • Since the QA gets an understanding of the story before a developer is involved, his view of the story is as close as possible to the user's requirement. This is important to make sure the intention of the user is not skewed by the understanding of a developer.
  • The QA is able to identify any smoke tests that may need to be run when a release is deployed to UAT or Live.
Automating stories

In our current project our QA starts automating user stories when he runs out of stories to test. In most cases the QA-to-dev ratio is 1:2 or 1:3, so the QA gets bogged down implementing acceptance criteria so that the team has enough stories to develop. It helps for devs to pair with QAs to automate acceptance tests, and my observations have been the following:

  • On a normal day, we developers are more in sync with writing good code than QAs are, so developers can always help in writing better test code.
  • When developers implement the acceptance criteria in the form of Given When Then, they are actually implementing the story itself.
  • Developers will get an idea of how to implement the story, and of the tests required, before they actually develop the story.
  • Where the story calls for new elements on the UI, developers can aid in mocking the UI for the story; otherwise automating all the steps of the user story could be a nightmare for the QA all on his own.
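To make the Given/When/Then shape concrete, here is a minimal acceptance test written as plain Python, in the spirit of tools like Cucumber. The account domain, class and amounts are all invented for illustration; they are not from any project mentioned in the post.

```python
# A tiny domain object standing in for the system under test.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Given an account with a balance of 100
account = Account(balance=100)

# When the user withdraws 30
account.withdraw(30)

# Then the remaining balance is 70
assert account.balance == 70
```

Notice how each comment line reads as a step of the story: implementing the steps forces you to build exactly the behaviour the story describes, which is the observation about developers "implementing the story itself".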

PS: my selfish reason - developers learn a new language… I learnt Ruby this way :)

In effect, when three different people with different skills are involved in the authoring and automation of a story, a lot more analysis happens, and more often than not edge cases are discovered ahead of development. Any edge cases that will increase the cost of the story can be identified and a decision taken on whether they have any real value in development.

Monday, 28 July 2008

Concurrency Control in .Net Data Tiers

Most of our application data is stored in relational databases; although there are other ways of storing data, I guess about 75% of applications use an RDBMS. While using these databases we write .Net code which functions correctly but may incorrectly write, read or handle data at run time. I have found a common case where developers tend not to use some of the features built into the RDBMS under the covers, and concentrate so much on getting the application tier right that they fall into pitfalls arising from data handling. One such issue is concurrency of data; it is one of those things developers don't look at when they hit a data-handling issue. One consequence is a lack of transaction capabilities; sometimes they don't analyse that the root cause could be data handling. Agree or not, I at least cannot think of a business application without transactions; more often than not, business processes in an application translate into a transaction block or unit.

Concurrency is one such issue which can be controlled. It can be handled with an optimistic or a pessimistic mechanism; whichever mode is exercised, it is important to understand the concept and how it can be applied.

Optimistic concurrency lets the last update succeed, which can make the data viewed by the end user dirty if an update has happened since the data was displayed. It is the application's responsibility to detect stale data and then decide whether to perform an update. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.

In the pessimistic scenario, read locks are obtained by the consumer of the data and any updates to the same data are prevented. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively long periods of time.
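A common way to get optimistic concurrency is a row-version column: the update only succeeds if the version is still the one you read. The sketch below uses Python's sqlite3 as a stand-in for the .NET data tier, purely so the example is self-contained and runnable; the table, columns and names are all made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)"
)
conn.execute("INSERT INTO customer VALUES (1, 'Alice', 1)")

def update_name(conn, customer_id, new_name, expected_version):
    """Succeed only if nobody changed the row since we read it."""
    cur = conn.execute(
        "UPDATE customer SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, customer_id, expected_version),
    )
    return cur.rowcount == 1  # 0 rows touched => someone else got there first

# Two writers both read version 1; the first wins, the second is stale.
assert update_name(conn, 1, "Alicia", expected_version=1) is True
assert update_name(conn, 1, "Alyce", expected_version=1) is False
```

The stale writer is not blocked, it simply finds out its update matched no row; it is then the application's job to re-read and decide what to do, exactly as described above.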

These are the mechanisms you can adopt. To implement them in the data tier of a .Net application, developers use transactions.

Most of today’s applications need to support transactions for maintaining the integrity of a system’s data. There are several approaches to transaction management; however, each approach fits into one of two basic programming models:

Manual transactions. You write code that uses the transaction support features of either ADO.NET or Transact-SQL directly in your component code or stored procedures, respectively.

Automatic transactions. Using Enterprise Services (COM+), you add declarative  attributes to your .NET classes to specify the transactional requirements of your objects at run time. You can use this model to easily configure multiple components to perform work within the same transaction.
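The manual-transaction model can be sketched as follows, again using Python's sqlite3 purely as a self-contained stand-in for ADO.NET or Transact-SQL; the account table and transfer logic are invented for illustration. The shape is the same in any stack: begin, do the work, commit, or roll everything back on failure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY,"
    " balance INTEGER CHECK (balance >= 0))"
)
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Both updates commit together, or neither is applied."""
    try:
        conn.execute(
            "UPDATE account SET balance = balance - ? WHERE id = ?", (amount, src)
        )
        conn.execute(
            "UPDATE account SET balance = balance + ? WHERE id = ?", (amount, dst)
        )
        conn.commit()
    except sqlite3.Error:
        conn.rollback()
        raise

transfer(conn, 1, 2, 40)       # succeeds: balances become 60 and 40

try:
    transfer(conn, 1, 2, 200)  # would drive account 1 negative
except sqlite3.IntegrityError:
    pass                       # rolled back; balances are unchanged
```

The failed transfer is the point of the example: because the two updates live in one transaction, the debit that violated the constraint is rolled back rather than leaving the books half-updated.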

To decide whether to put transactions in SQL code or ADO.Net, or to use automatic transactions, I found a simple decision tree which lets you choose a method for implementing transactions; of course, you first decide whether you need transactions in .Net at all.

[Image: decision tree for choosing a transaction implementation]

Wednesday, 9 July 2008

Data Modelling Jazz

When we think of data modelling, it is often a pretty picture created in Visio; for someone more serious about data modelling, it is representing entities, attributes and relationships in a meaningful manner. I didn't realise modelling languages are a thing in their own right, and that a tool such as Visio supports/works on such languages. E.g. IDEF1X (Integration DEFinition for Information Modelling) is a modelling language. Woah, that definition really woke me up… and, as usual, it was developed in the US Air Force in 1985.

The primary tool of a database designer is the data model. It's such a great tool because it can show the details not only of single tables but of the relationships between several entities at a time. Of course, it is not the only way to document a database:

• Often a product that features a database as its central focus will include a document that lists all tables, data types and relationships. (Developers think: can't be bothered.)
• Every good DBA has a script of the database saved somewhere for re-creating it. (Developers think: still not bothered.)
• SQL Server's metadata includes ways to add properties to the database to describe the objects. (Developers by now think: oh, get a life, will you.)

Some common terms you will come across are entities, which are synonymous with tables in a database; attributes, which are synonymous with column definitions in a table; and relationships, which represent how two entities relate to each other. We represent these pictorially, or grammatically in written text. Anyway, my idea in blogging about data modelling was to drop a few notes on some practices we could adopt while modelling data.

  • Entity names: There are two ways you can go about these: plural or singular. Some argue table names should be singular, but many feel that a table name refers to the set of rows and should be plural. Whatever convention you choose, be consistent with it; mixing and matching could end up confusing the person reading the data model.
  • Attribute names: It's generally not necessary to repeat the entity name in the attribute name, except for the primary key; the entity name is implied by the attribute's inclusion in the entity. The chosen attribute name should reflect precisely what is contained in the attribute and how it relates to the entity.
  • Relationships: Name relationships with verb phrases, which make the relationship between a parent and child entity a readable sentence. The sentence expresses the relationship using the entity names and the relationship cardinality, and is a very powerful tool for communicating the purpose of the relationships to non-technical members of the project team (e.g., customer representatives).
  • Domains: Define domains for your attributes, implementing type inheritance wherever possible to take advantage of domains that are similar. Using domains gives you a set of standard templates to use when building databases and ensures consistency across your database.
  • Objects: Define every object so it is clear what you had in mind when you created it. This is a tremendously valuable practice to get into, as it will pay off later when questions are asked about the objects, and it serves as documentation for other programmers and/or users.
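The naming conventions above can be shown in a small DDL sketch: singular entity names, attributes that don't repeat the entity name (except the key), and a foreign key that reads as a sentence ("a customer places an order"). The schema is invented for illustration, and it is executed through Python's sqlite3 only so the example is runnable as-is.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,  -- only the key repeats the entity name
    name        TEXT NOT NULL,        -- not customer_name
    email       TEXT
);
CREATE TABLE "order" (                -- singular entity name, consistently
    order_id    INTEGER PRIMARY KEY,
    placed_on   TEXT NOT NULL,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id)
);
""")
```

Note the trade-off the singular convention forces here: `order` is a SQL keyword, so it has to be quoted, which is one practical argument people raise for plural table names.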

Wednesday, 25 June 2008

Iteration Length?

Our organisation has always used two-week iterations, and sometimes we wonder if two weeks is optimum. The first thought was that for enhancement projects two weeks fits the bill; these could be projects where the solution to implement the features is already available to the team. But for a brand new project which involves a technical learning curve and innovation, three weeks might be apt. This is subject to discussion with the teams. Although Scrum recommends a 30-day iteration, 2-week iterations yield results and 3-week iterations will yield innovative results. At the end of the day both iteration lengths are within the realm of Scrum, and the length is a variable available to the teams and the product owner; how it is varied or used, as long as the goals of the release are met, is not important.

Based on previous experience with two-week iterations, I have made some observations which remind us of the strengths of the two-week iteration:

  • Two weeks is just about enough time to get some amount of meaningful development done.
  • Two-week iterations indicate and provide more opportunities to succeed or fail. E.g., within a 90-day release plan, five 2-week iterations of development and one stabilisation iteration at the end make it possible to have checkpoints on the way to the release.
  • The 2-week rhythm is a natural calendar cycle that is easy for all participants to remember and lines up well with typical 1 or 2 week vacation plans, simplifying capacity estimates for the team.
  • Velocity can be measured and scope can be adjusted more quickly.
  • The overhead in planning and closing an iteration is proportioned to the amount of work that can be accomplished in a two week sprint.
  • A two week iteration allows the team to break down work into small chunks where the define/build/test cycle is concurrent. With longer iterations, there is a tendency for teams to build a more waterfall-like process.
  • The margin of error between planned and available capacity is smaller in two-week iterations.

Well, the above is based on what I have observed and may be different in your organisation.

Sunday, 15 June 2008

Enterprise Application Integration

I have been thinking recently about integration scenarios for different applications and how to decide on the way two applications should be integrated. In fact, my question is: what factors would I consider in deciding how two applications will integrate? On the same note, as much as there is a benefit that comes out of integrating two applications, more often than not there is a resulting consequence based on how they are integrated.

Should the two applications be loosely coupled? This is the first question that comes to my mind, and on most occasions the answer is yes: the more loosely coupled the two applications are, the more opportunity they have to extend their functionality without affecting each other or the integration itself. If they are tightly coupled, it's obvious that the integration breaks when the applications change.

Simplicity: if we as developers minimise the code involved in integrating the applications, the integration becomes easier to maintain and more reliable.

Integration technology: different integration techniques require varying amounts of specialised software and hardware. These special tools can be expensive, can lead to vendor lock-in, and increase the burden on developers to understand how to use the tools to integrate applications.

Data format: the format in which data is exchanged is important, and it should be possible for an independent translator, rather than the integrating applications themselves, to handle the format, so that at some point these applications can also talk to other applications should the need arise. A related issue is data format evolution and extensibility: how the format can change over time and how that will affect the applications.
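The idea of an independent translator can be sketched as a standalone component that maps each application's format to a shared canonical record. All field names here are invented for illustration:

```python
# Translator sketch: neither application knows the other's format; a
# separate component maps each one to and from a canonical record.

def crm_to_canonical(record):
    """Hypothetical CRM format -> shared canonical customer record."""
    return {"customer_id": record["CustID"], "name": record["FullName"]}

def canonical_to_billing(record):
    """Canonical record -> hypothetical billing app format."""
    return {"account": record["customer_id"], "holder": record["name"]}

crm_record = {"CustID": 7, "FullName": "Ada Lovelace"}
billing_record = canonical_to_billing(crm_to_canonical(crm_record))
print(billing_record)  # {'account': 7, 'holder': 'Ada Lovelace'}
```

Adding a third application then means writing one new pair of mappings to the canonical format, not one mapping per existing application.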

Data timeliness: we need to minimise the length of time between when one application decides to share some data and when the other applications have that data. Data should be exchanged in small chunks rather than large sets. Latency in data sharing has to be factored into the integration design; the longer the latency, the more opportunity for shared data to become stale, and the more complex the integration becomes.
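Exchanging data in small chunks can be sketched with a simple generator; the rows and chunk size are arbitrary illustrations:

```python
# Timeliness sketch: yield rows in small batches so each batch can be
# shared as soon as it is ready, instead of waiting for the full set.

def chunked(rows, size=2):
    """Yield rows in small batches."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

rows = ["r1", "r2", "r3", "r4", "r5"]
for batch in chunked(rows):
    print(batch)  # each small batch can be sent immediately
```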

Data or functionality: integrated applications may also wish to share functionality, such that each application can invoke functionality in the others. This can have significant consequences for how well the integration works.

Asynchronicity: this is an aspect developers often start to appreciate only after implementation, when performance tests fail. By default we think and code synchronously. This matters especially for integrated applications, where the remote application may not be running or the network may be unavailable; the source application may wish to simply make shared data available, or log a request to be processed later.
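A rough sketch of the asynchronous style, using an in-process queue as a stand-in for real messaging middleware (the function names and payloads are invented):

```python
# Asynchronicity sketch: the source app enqueues a request and moves on,
# even if the remote app is down; the remote app drains the queue when
# it becomes available again.
from queue import Queue

outbox = Queue()

def share_data(payload):
    """Source app: enqueue and return immediately."""
    outbox.put(payload)
    return "queued"

def remote_app_drain():
    """Remote app: process whatever accumulated while it was away."""
    processed = []
    while not outbox.empty():
        processed.append(outbox.get())
    return processed

share_data({"order": 1})
share_data({"order": 2})
print(remote_app_drain())  # [{'order': 1}, {'order': 2}]
```

The key point is that `share_data` never blocks on the remote application being reachable, which is exactly the behaviour a synchronous call cannot give you.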

Friday, 16 May 2008

Databases and Software Development

Database development is a form of software development, and yet all too often the database is treated as a secondary entity when development teams discuss architecture and test plans. Many developers do not seem to believe, understand, or even feel the need to understand, that standard software development best practices apply to database development. Virtually every application imaginable requires some form of data store, and many in the development community go beyond simply persisting data, creating applications that are data driven. Given this dependency, data is the central factor that dictates the value any application can bring to its users. Without the data, there is no need for the application.

The very reason we use the word legacy to refer to an application as young as three years old is, more often than not, the database. As applications grow with new features, the thought developers put into refactoring front-end code or developing new code is not put into database development. What happens, essentially, is that your front end ends up two years ahead of the database, and as time progresses your application's capabilities and features are pulled back by the limitations of database quality and standards. We can all deny this, but in reality it results either in a limitation or in the extra cost of creating workarounds.

The central argument on many a database forum is what to do with that ever-present required logic. Sadly, try as we might, developers have still not figured out how to develop an application without implementing business requirements, and so the debate rages on. Does “business logic” belong in the database? In the application tier? What about the user interface? And what impact do newer application architectures have on this age-old question?

In recent times I have been able to look at technologies like Astoria, Linq, the Entity Framework and Katmai, and I was amazed at how little or no database code needs to be written by a developer working on the UI or business layer. At the same time, being a SQL Server fan myself, I was worried that my database skills would slowly vaporise into thin air. That is not as bad as it sounds. These technologies are abstractions of the database: they allow developing a logical data layer which maps to a physical database, so developers who work primarily at the UI or business layer will slowly stop writing any SQL at all, instead churning out the code that interests them against the logical data layer. What offsets this is the need to learn a new syntax in the form of Linq. On the other hand, database development will shift to being a specialist job: the design, development and management activities of the database developer and administrator will emerge as specialist skills, as opposed to generalist skills, in the near future. The future of the database specialist seems bright, but what remains to be seen is how organisations respond to this shift in software development ideology. To be fair, this model of specialists and generalists is not new.
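The flavour of querying a logical data layer instead of writing SQL can be illustrated outside of Linq itself. This is a loose Python analogue, not actual Linq or Entity Framework code; the `Customer` entity and the sample data are invented:

```python
# Rough analogue of a logical data layer: the developer filters and
# projects entities in the host language, and a mapping layer (here
# trivially a list) stands in for the physical database.
from dataclasses import dataclass

@dataclass
class Customer:          # the "logical" entity mapped to a physical table
    name: str
    city: str

customers = [Customer("Ann", "London"), Customer("Bob", "Leeds"),
             Customer("Cy", "London")]

# A Linq-style query expressed in the host language, with no SQL in sight:
londoners = [c.name for c in customers if c.city == "London"]
print(londoners)  # ['Ann', 'Cy']
```

In the real frameworks the mapping layer translates such queries into SQL against the physical database, which is precisely why the UI or business-layer developer no longer writes that SQL by hand.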