Monday, November 5, 2007

Concept of Partition

A partition is a division of a logical database or its constituting elements into distinct independent parts.

Database partitioning is normally done for manageability, performance or availability reasons.

Partitioning can be done either by building separate smaller databases (each with its own tables, indexes, and transaction logs), or by splitting selected elements, for example just one table.

Horizontal partitioning involves putting different rows into different tables. Perhaps customers with ZIP Codes less than 50000 are stored in CustomersEast, while customers with ZIP Codes greater than or equal to 50000 are stored in CustomersWest. The two partition tables are then CustomersEast and CustomersWest, while a view with a union might be created over both of them to provide a complete view of all customers.

Vertical partitioning involves creating tables with fewer columns and using additional tables to store the remaining columns. Normalization is a process that inherently involves vertical partitioning. Different physical storage might also be used to realize vertical partitioning; storing infrequently used or very wide columns on a different device, for example, is a method of vertical partitioning. Done explicitly or implicitly, this type of partitioning is called "row splitting", since each row is split across its columns.
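To make the two schemes concrete, here is a minimal Python sketch. The table names CustomersEast and CustomersWest come from the text; the sample rows and the choice of "hot" columns are illustrative assumptions.

```python
def horizontal_partition(rows, split_zip=50000):
    """Route each customer row to one of two partitions by ZIP code,
    mirroring the CustomersEast / CustomersWest example."""
    return {
        "CustomersEast": [r for r in rows if r["zip"] < split_zip],
        "CustomersWest": [r for r in rows if r["zip"] >= split_zip],
    }

def vertical_partition(row, hot_columns=("id", "name")):
    """Split one row's columns into a frequently used part and an
    overflow part for wide or rarely read columns ("row splitting")."""
    hot = {k: v for k, v in row.items() if k in hot_columns}
    cold = {k: v for k, v in row.items() if k not in hot_columns}
    return hot, cold

customers = [
    {"id": 1, "name": "Ann", "zip": 10001, "bio": "very wide column ..."},
    {"id": 2, "name": "Bob", "zip": 90210, "bio": "very wide column ..."},
]

parts = horizontal_partition(customers)
# The "view with a union" over both partitions is simply their concatenation:
all_customers = parts["CustomersEast"] + parts["CustomersWest"]
```

A real DBMS does this routing transparently once the partitioning scheme is declared; the sketch only shows where each row or column ends up.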

Criteria for Partitioning

Current high-end relational database management systems provide different criteria for splitting the database. They take a partitioning key and assign a partition based on certain criteria. Common criteria are:

Range partitioning

Selects a partition by determining if the partitioning key is inside a certain range. An example could be a partition for all rows where the column zipcode has a value between 70000 and 79999.

List partitioning

A partition is assigned a list of values; if the partitioning key has one of these values, the partition is chosen. For example, all rows where the column Country is Iceland, Norway, Sweden, Finland, or Denmark could form a partition for the Nordic countries.

Hash partitioning

The value of a hash function determines membership in a partition. Assuming there are four partitions, the hash function could return a value from 0 to 3.

Composite partitioning allows for certain combinations of the above partitioning schemes, for example by first applying a range partitioning and then a hash partitioning.
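Each criterion can be sketched as a function from the partitioning key to a partition (this is illustrative Python, not any particular DBMS's syntax; the ZIP-code ranges and the Nordic country list follow the examples above, and CRC-32 merely stands in for the engine's internal hash function):

```python
import bisect
import zlib

def range_partition(zipcode, upper_bounds=(59999, 69999, 79999, 89999)):
    """Index of the first range whose upper bound contains the key;
    e.g. keys 70000-79999 map to partition 2."""
    return bisect.bisect_left(upper_bounds, zipcode)

def list_partition(country):
    """Choose a partition from an explicit list of values."""
    nordic = {"Iceland", "Norway", "Sweden", "Finland", "Denmark"}
    return "nordic" if country in nordic else "other"

def hash_partition(key, num_partitions=4):
    """Hash the key into one of num_partitions buckets (0..3 here).
    CRC-32 is used only to keep the sketch deterministic."""
    return zlib.crc32(str(key).encode()) % num_partitions

def composite_partition(zipcode, num_partitions=4):
    """Composite scheme: range first, then hash within the range."""
    return range_partition(zipcode), hash_partition(zipcode, num_partitions)
```

For example, `range_partition(75000)` returns 2, and `composite_partition(75000)` returns a (range, hash-bucket) pair identifying a sub-partition.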

Functional and Non-functional Requirements

  • Functional requirements:-

Functional requirements define the internal workings of the software: that is, the calculations, technical details, data manipulation and processing, and other specific functionality that show how the use cases are to be satisfied. They are supported by non-functional requirements, which impose constraints on the design or implementation (such as performance requirements, security, quality standards, or design constraints).

As defined in requirements engineering, functional requirements specify specific behaviors of a system. This should be contrasted with non-functional requirements which specify overall characteristics such as cost and reliability. (An alternative view is that functional requirements specify specific behavior while nonfunctionals provide adjectives which may be used to describe these behaviors.)

Typically, a requirements analyst generates functional requirements after building use cases. However, there may be exceptions, since software development is an iterative process and certain requirements are sometimes conceived before the use cases are defined. Both artifacts (use case documents and requirements documents) complement each other in a bidirectional process.

A typical functional requirement will contain a unique name and number, a brief summary, and a rationale. This information is used to help the reader understand why the requirement is needed, and to track the requirement through the development of the system.

The core of the requirement is a clear, readable description of the required behavior. This behavior may come from organizational or business rules, or it may be discovered through elicitation sessions with users, stakeholders, and other experts within the organization. Many requirements will be uncovered during use case development. When this happens, the requirements analyst should create a placeholder requirement with a name and summary, and research the details later, filling them in when they are better known.

Software requirements must be clear, correct, unambiguous, specific, and verifiable.


  • Non-functional requirements:-

In systems engineering and requirements engineering, non-functional requirements are requirements which specify criteria that can be used to judge the operation of a system, rather than specific behaviors. This should be contrasted with functional requirements that specify specific behavior or functions. Typical non-functional requirements are reliability, scalability, and cost. Non-functional requirements are often called the ilities of a system. Other terms for non-functional requirements are "quality attributes" and "quality of service requirements".

Examples

A system may be required to present the user with a real-time display of the number of records in a database. This is a functional requirement. In order to fulfill this requirement, the system architects must ensure that the database is capable of updating its record count within a predetermined response time - this is a non-functional requirement.
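A hedged sketch of how the two requirement types would be verified differently (the in-memory list stands in for the real database, and the 0.5-second budget is an assumed figure, not one from the text):

```python
import time

def record_count_with_timing(db):
    """Return (count, elapsed_seconds). The count itself satisfies the
    functional requirement; the elapsed time is what the non-functional
    response-time requirement constrains."""
    start = time.perf_counter()
    count = len(db)  # stand-in for a real COUNT(*) query
    return count, time.perf_counter() - start

count, elapsed = record_count_with_timing([{"id": i} for i in range(1000)])
functional_ok = (count == 1000)       # correct behavior: the right number
non_functional_ok = (elapsed <= 0.5)  # assumed response-time budget
```

Note how the functional check inspects *what* the system produced, while the non-functional check inspects *how* it was produced.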

Sufficient network bandwidth may also be a non-functional requirement of a system.

Other examples:

  • Availability
  • Certification
  • Dependency on other parties
  • Documentation
  • Efficiency (resource consumption for given load)
  • Legal and licensing issues
  • Maintainability
  • Performance / Response time
  • Platform compatibility
  • Price
  • Resource constraints (processor speed, memory, disk space, network bandwidth, etc.)
  • Safety
  • Scalability
  • Security
  • Compatibility with software, tools, standards, etc.
  • Support issues
  • Usability by target user community

Process Analysis


An operation is composed of processes designed to add value by transforming inputs into useful outputs. Inputs may be materials, labor, energy, and capital equipment. Outputs may be a physical product (possibly used as an input to another process) or a service. Processes can have a significant impact on the performance of a business, and process improvement can improve a firm's competitiveness.

The first step to improving a process is to analyze it in order to understand the activities, their relationships, and the values of relevant metrics. Process analysis generally involves the following tasks:

· Define the process boundaries that mark the entry points of the process inputs and the exit points of the process outputs.

· Construct a process flow diagram that illustrates the various process activities and their interrelationships.

· Determine the capacity of each step in the process. Calculate other measures of interest.

· Identify the bottleneck, that is, the step having the lowest capacity.

· Evaluate further limitations in order to quantify the impact of the bottleneck.

· Use the analysis to make operating decisions and to improve the process.
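Steps 3 and 4 above reduce to simple arithmetic for a serial process. A sketch with made-up per-step capacities (units per hour):

```python
# Per-step capacities for a hypothetical three-step serial process.
steps = {"cut": 120, "assemble": 45, "pack": 90}  # units/hour

def bottleneck(capacities):
    """The bottleneck is the step with the lowest capacity; for tasks
    in series, the process capacity equals the bottleneck capacity."""
    name = min(capacities, key=capacities.get)
    return name, capacities[name]

name, process_capacity = bottleneck(steps)
# Utilization of each step when the process runs at full capacity:
utilization = {s: process_capacity / c for s, c in steps.items()}
```

Here "assemble" is the bottleneck at 45 units/hour, so "pack" sits at 50% utilization even when the process runs flat out.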

Process Flow Diagram

The process boundaries are defined by the entry and exit points of inputs and outputs of the process.

Once the boundaries are defined, the process flow diagram (or process flowchart) is a valuable tool for understanding the process using graphic elements to represent tasks, flows, and storage. The following is a flow diagram for a simple process having three sequential activities:

[Figure: process flow diagram of three sequential tasks, with raw material storage at the start and finished goods storage at the end]

The symbols in a process flow diagram are defined as follows:

· Rectangles: represent tasks

· Arrows: represent flows. Flows include the flow of material and the flow of information. The flow of information may include production orders and instructions. The information flow may take the form of a slip of paper that follows the material, or it may be routed separately, possibly ahead of the material in order to ready the equipment. Material flow usually is represented by a solid line and information flow by a dashed line.

· Inverted triangles: represent storage (inventory). Storage bins commonly are used to represent raw material inventory, work in process inventory, and finished goods inventory.

· Circles: represent storage of information (not shown in the above diagram).

In a process flow diagram, tasks drawn one after the other in series are performed sequentially. Tasks drawn in parallel are performed simultaneously.

In the above diagram, raw material is held in a storage bin at the beginning of the process. After the last task, the output also is stored in a storage bin.

When constructing a flow diagram, care should be taken to avoid pitfalls that might cause the flow diagram not to represent reality. For example, if the diagram is constructed using information obtained from employees, the employees may be reluctant to disclose rework loops and other potentially embarrassing aspects of the process. Similarly, if there are illogical aspects of the process flow, employees may tend to portray it as it should be and not as it is. Even if they portray the process as they perceive it, their perception may differ from the actual process. For example, they may leave out important activities that they deem to be insignificant.

Process Performance Measures

Operations managers are interested in process aspects such as cost, quality, flexibility, and speed. Some of the process performance measures that communicate these aspects include:

· Process capacity - The capacity of the process is its maximum output rate, measured in units produced per unit of time. The capacity of a series of tasks is determined by the lowest capacity task in the string. The capacity of parallel strings of tasks is the sum of the capacities of the individual strings, except for cases in which the strings produce different outputs that are combined. In such cases, the capacity of the parallel strings is that of the lowest-capacity string.

· Capacity utilization - the percentage of the process capacity that actually is being used.

· Throughput rate (also known as flow rate ) - the average rate at which units flow past a specific point in the process. The maximum throughput rate is the process capacity.

· Flow time (also known as throughput time or lead time) - the average time that a unit requires to flow through the process from the entry point to the exit point. The flow time is the length of the longest path through the process. Flow time includes both processing time and any time the unit spends between steps.

· Cycle time - the time between successive units as they are output from the process. Cycle time for the process is equal to the inverse of the throughput rate. Cycle time can be thought of as the time required for a task to repeat itself. Each series task in a process must have a cycle time less than or equal to the cycle time for the process. Put another way, the cycle time of the process is equal to the longest task cycle time. The process is said to be in balance if the cycle times are equal for each activity in the process. Such balance rarely is achieved.

· Process time - the average time that a unit is worked on. Process time is flow time less idle time.

· Idle time - time when no activity is being performed, for example, when an activity is waiting for work to arrive from the previous activity. The term can be used to describe both machine idle time and worker idle time.

· Work in process - the amount of inventory in the process.

· Set-up time - the time required to prepare the equipment to perform an activity on a batch of units. Set-up time usually does not depend strongly on the batch size and therefore can be reduced on a per unit basis by increasing the batch size.

· Direct labor content - the amount of labor (in units of time) actually contained in the product. Excludes idle time when workers are not working directly on the product. Also excludes time spent maintaining machines, transporting materials, etc.

· Direct labor utilization - the fraction of labor capacity that actually is utilized as direct labor.
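The set-up time bullet above is simple amortization: set-up is paid once per batch, so its per-unit share shrinks as the batch grows. A sketch with assumed numbers:

```python
def time_per_unit(setup_time, run_time_per_unit, batch_size):
    """Set-up time is paid once per batch, so its per-unit share
    shrinks as the batch size grows."""
    return setup_time / batch_size + run_time_per_unit

# With a 30-minute set-up and 2 minutes of run time per unit:
small_batch = time_per_unit(30, 2, 10)  # 30/10 + 2 = 5.0 minutes per unit
large_batch = time_per_unit(30, 2, 30)  # 30/30 + 2 = 3.0 minutes per unit
```

Tripling the batch size cuts per-unit time from 5.0 to 3.0 minutes, at the cost of carrying more work-in-process inventory.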

Little's Law

The inventory in the process is related to the throughput rate and throughput time by the following equation:

W.I.P. Inventory = Throughput Rate x Flow Time

This relation is known as Little's Law, named after John D.C. Little who proved it mathematically in 1961. Since the throughput rate is equal to 1 / cycle time, Little's Law can be written as:

Flow Time = W.I.P. Inventory x Cycle Time
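Both forms of Little's Law are one multiplication; in code, with illustrative numbers (10 units per hour flowing through a process whose flow time is 3 hours):

```python
def wip_inventory(throughput_rate, flow_time):
    """Little's Law: W.I.P. inventory = throughput rate x flow time."""
    return throughput_rate * flow_time

def flow_time_from_wip(wip, cycle_time):
    """Equivalent form, since throughput rate = 1 / cycle time."""
    return wip * cycle_time

wip = wip_inventory(10, 3)                   # 10 units/hour * 3 hours = 30 units
recovered = flow_time_from_wip(wip, 1 / 10)  # 30 units * 0.1 hours/unit = 3 hours
```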

The Process Bottleneck

The process capacity is determined by the slowest series task in the process; that is, the task having the lowest throughput rate or the longest cycle time. This slowest task is known as the bottleneck. Identification of the bottleneck is a critical aspect of process analysis, since it not only determines the process capacity but also provides the opportunity to increase that capacity.

Saving time in the bottleneck activity saves time for the entire process. Saving time in a non-bottleneck activity does not help the process since the throughput rate is limited by the bottleneck. It is only when the bottleneck is eliminated that another activity will become the new bottleneck and present a new opportunity to improve the process.

If the next slowest task is much faster than the bottleneck, then the bottleneck is having a major impact on the process capacity. If the next slowest task is only slightly faster than the bottleneck, then increasing the throughput of the bottleneck will have a limited impact on the process capacity.
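The limit described above is easy to see numerically (the step names and rates are made up for illustration):

```python
def process_capacity(steps):
    """Serial process capacity = capacity of the slowest step."""
    return min(steps.values())

steps = {"A": 60, "B": 40, "C": 45}   # B is the bottleneck
before = process_capacity(steps)      # 40 units/hour
steps["B"] = 100                      # speed up B dramatically
after = process_capacity(steps)       # only 45: C is the new bottleneck
```

Even a 2.5x speedup of B raises process capacity only from 40 to 45, because C, the next slowest task, was only slightly faster than the original bottleneck.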

Starvation and Blocking

Starvation occurs when a downstream activity is idle with no inputs to process because of upstream delays. Blocking occurs when an activity becomes idle because the next downstream activity is not ready to accept its output. Both starvation and blocking can be reduced by adding buffers that hold inventory between activities.

Process Improvement

Improvements in cost, quality, flexibility, and speed are commonly sought. The following lists some of the ways that processes can be improved.

· Reduce work-in-process inventory - reduces lead time.

· Add additional resources to increase capacity of the bottleneck. For example, an additional machine can be added in parallel to increase the capacity.

· Improve the efficiency of the bottleneck activity - increases process capacity.

· Move work away from bottleneck resources where possible - increases process capacity.

· Increase availability of bottleneck resources, for example, by adding an additional shift - increases process capacity.

· Minimize non-value adding activities - decreases cost, reduces lead time. Non-value adding activities include transport, rework, waiting, testing and inspecting, and support activities.

· Redesign the product for better manufacturability - can improve several or all process performance measures.

· Flexibility can be improved by outsourcing certain activities. Flexibility also can be enhanced by postponement, which shifts customizing activities to the end of the process.

In some cases, dramatic improvements can be made at minimal cost when the bottleneck activity is severely limiting the process capacity. On the other hand, in well-optimized processes, significant investment may be required to achieve a marginal operational improvement. Because of the large investment, the operational gain may not generate a sufficient rate of return. A cost-benefit analysis should be performed to determine if a process change is worth the investment. Ultimately, net present value will determine whether a process "improvement" really is an improvement.

Business Systems Analyst Responsibilities


Job Responsibilities

  • Act as a strategic partner between the business community and IT development teams to resolve functional and technical issues related to business applications, to troubleshoot data or transaction issues, and to review opportunities to leverage new functionality
  • Drive efficiency and operational improvement through business process definition, system alignment, and optimization of standard business application functionality.
  • Identify gaps between the current deployment of applications and future requirements that have evolved due to organizational growth, changes, or strategy. Translate business requirements into system definitions and solutions.
  • Lead cross-functional efforts to address business process or systems issues
  • Analyze requests or requirements for application patches or upgrades to determine impact to business and integrated systems
  • Comprehensive project management of new business application initiatives, performing requirements gathering, development effort estimates, resource management, gap analysis, implementation configuration, scope control, testing, training and end-user support, according to project methodology
  • Work with business community to document functional test scenarios, test plans, and end-user acceptance testing criteria


  • Participate in technical design sessions, working with technical resources, to provide insight during solution development
  • Identify and communicate project risks and recommend solutions
  • Design, interpret, or use complex logical data and object models to guide technical design decisions and overall business applications strategy
  • Provide support during period close and other major financial milestones of the company
  • Provide ad hoc data queries or reports to the business for analysis (using TOAD, Hyperion, or other query tools)
  • Promote use and acceptance of project methodology and documentation standards

Qualifications and Requirements

  • Deep business applications experience (including Oracle Financials, Purchasing, Inventory, BOM, and Order Management)
  • Strong financial accounting knowledge and industry experience
  • Full analytical capability based on understanding of technical architecture and query tools
  • Exceptional leadership, written and oral communication, and meeting facilitation skills
  • Experience in software implementations, requirements gathering, systems analysis, and functional design
  • Ability to communicate effectively with both business and technical staff to convey complex ideas both verbally and in written form
  • Ability to quickly grasp concepts relating to customizations that have been designed and developed.
  • Ability to translate business requirements into high-level and detailed functional specifications.
  • Exposure to various project management methodologies and their application to cross-functional project work.
  • Demonstrated success in leading a team, with both functional and technical resources, to address cross-functional issues
  • CPA or related background a plus

Business analyst interview questions

You never know what you will be asked on a job interview. The following sample interview questions for business analysts will help you prepare. You need to be able to answer all questions truthfully and professionally. Here are the business analyst interview questions:

Q. Can you tell me why you are considering leaving your present job?
A. Regardless of the reason, do not badmouth your current employer. Negativity will always hurt you. Good answers include: “There is no room for growth at my current employer. I am looking for a company with long-term growth opportunities”. “Due to a company restructuring, my entire department is relocating to Florida. I was given the option of moving, but do not wish to relocate”. “My current company is not doing well, and has been laying off employees. There is no job security there, and more layoffs are expected”.

Q. How do you handle stress and pressure?
A. “I find that I work better under pressure, and I enjoy working in an environment that is challenging.” “I am the type of person that diffuses stress. I am used to working in a demanding environment with deadlines, and enjoy the challenges.”

Q. We have met several business analysts. Why are you the one we should hire?
A. Give definite examples of your skills and accomplishments. Be positive, and emphasize how your background matches the job description. Mention any software packages and spreadsheet software you are familiar with. Also let them know if you have advanced knowledge of any of the software.

Q. What do you know about our company?
A. This question is used to see if you have prepared for the interview. Candidates that have researched the company are more appealing. Companies like prepared, organized candidates.

Q. What are your greatest strengths?
A. Be positive and honest. “My greatest strength is maximizing the efficiency of my staff. I have successfully led numerous teams on difficult projects. I have an excellent ability to identify and maximize each staff member's strengths.” Give examples.

Q. Tell me about your greatest weakness?
A. It is very important to give a strength that compensates for your weakness. Make your weakness into a positive. “I consider myself a 'big picture' person. I sometimes skip the small details. For this reason, I always have someone on my team that is very detail oriented.” Another good answer: “Sometimes, I get so excited and caught up in my work that I forget that my family life should be my number one priority.”

Q. What are your goals for the future?
A. “My long term goals are to find a company where I can grow, continue to learn, take on increasing responsibilities, and be a positive contributor”.

Hopefully these typical business analyst interview questions will help you. It is important to customize the answers for your specific background and experience.

Now that we have gone over the interview questions for business analyst, you need to be aware of important resources that can make your job search easier and more thorough.

What can a Business Analyst do differently than a Project Manager?

A project/program manager, as the name suggests, is mostly concerned with the progress of the entire project and taking care of the project members. This includes cost management (invoicing, billing), time management (scheduling), risk management (project closure) and similar things.

A business analyst is mostly concerned with gathering and documenting business requirements (application requirements) and communicating them to the development and test teams.

Hope this clarifies your question.


Monday, May 14, 2007

What are the Differences Between SQL Server 2000 and SQL Server 2005?

I've been asked this question every time that there's a new version and yet I've never been able to give what I think is a nice, concise, logical answer that satisfies the asker. Probably it's a lack of my ability to easily form words in my mouth and get them out in the proper order, so I decided it might make some sense to do this on paper (metaphorically speaking) and help others out.

Like many of you, I usually get this question from someone outside of SQL Server: a Windows admin, a network guy, etc., someone who has little contact with SQL Server. Or maybe it's someone who's been stuck administering a SQL Server instance.

In any case, I wanted to try to explain this concisely for the non-DBAs. As I began this project, however, I soon realized that it's not easy to give a good general answer. As with everything else in SQL Server, it seems that "it depends" is the best general answer, so I broke this up into a few areas. This part will look at the administrative differences, and the next will cover more of the development differences.

The Administrative Differences

Administering a SQL Server instance to me means making sure the server service runs efficiently and is stable and allows clients to access the data. The instance should keep data intact and function according to the rules of the code implemented while being well maintained.

Or for the non-DBAs, it means that you are the sysadmin and it just works.

The overall differences are few. Sure, we use Management Studio instead of Enterprise Manager, but that's not really a big deal. Really, many of the changes, like being able to change connections for a query, are superficial improvements that don't present a substantial change. If you think they do, you might be in the wrong job.

Security is one area with very nice improvements. The separation of the schema from the owner makes administrative changes easier, and that is a big deal because it greatly increases the chances you won't keep an old account active simply because it's a pain to change object ownership. There's also more granularity and ease of administration in using the schema as another level for assigning permissions.

Another big security change is the ability to secure your web services using certificates instead of requiring authentication with a name and password. Add to that the capability to encrypt data and manage the keys, which can make a big difference in the overall security of your data. You still have to carefully ensure your application and access are properly secured, but just the marketing value of encryption when you have credit card, financial, or medical data is huge. SQL Server 2000 had no real security features for data, allowing an administrator to see all data. You could purchase a third-party add-on, but it was expensive and required staff training. Not that you don't need to learn about SQL Server 2005's encryption, but it should be a skill that most DBAs will learn and be able to bring to your organization over time.

High availability is becoming more and more important to all sizes of businesses. In the past, clustering or log shipping were your main choices, but both were expensive and required the Enterprise Edition. This put these features out of the reach of many companies, or at least, out of many DBAs' budgets. With SQL Server 2005, you can now implement clustering, log shipping, or the new Database Mirroring with the Standard edition. With the ability of Database Mirroring to use commodity hardware, even disparate hardware between the primary and mirror databases, this is a very reasonable cost solution for almost any enterprise.

There are also online indexes, online restores, and fast recovery in the Enterprise Edition that can help ensure that you take less downtime. Fast recovery especially can be an important feature, allowing the database to be accessed as the undo operations start. With a lot of open transactions when a database is restarted, this can really add up to significant amounts of time. In SQL Server 2000, you had to have a complete, intact database before anyone could access it. With redo/undo operations sometimes taking a significant amount of time, this could delay the time from Windows startup to database availability by minutes.

Data sizes always grow, and for most companies performance is always an issue on some server. With SQL Server 2000, you were limited to 2GB of RAM and 4 CPUs in the Standard Edition. The number of CPUs hasn't changed, but you can now use as much RAM as the OS allows. There is also no limit on database size, not that many installations ever approached the 1,048,516 TB limit of SQL Server 2000. Since RAM is usually a limiting factor in the performance of many databases, upgrading to SQL Server 2005 could be something you can take advantage of. SQL Server 2005 also has more options and capabilities on the 64-bit platform than SQL Server 2000.

Why Upgrade?

This is an interesting question and one I've been asked quite a bit over the last 18 months since SQL Server 2005 has been released. The short answer is that if SQL Server 2000 meets your needs, then there's no reason to upgrade. SQL Server 2000 is a strong, stable platform that has worked well for millions of installations. If it meets your needs, you are not running up against the limits of the platform, and you are happy with your system, then don't upgrade.

However, there is a caveat to this. First, the support timeline for SQL Server 2000 shows mainstream support ending next year, in April 2008. I can't imagine that Microsoft wouldn't extend that given the large number of installations of SQL Server 2000, but with the next version of SQL Server likely to come out next year, I can see this being the point at which you cannot call for regular support. The extended support timeline continues through 2013, but that's an expensive option.

The other consideration is that with a new version coming out next year, you might want to just start making plans to upgrade to that version even if you're happy with SQL Server 2000. If the plan is to release a new version every 2-3 years, you'll need to upgrade at least every 5-6 years to maintain support options.

In any case, if the application you are upgrading comes from a third party, be sure it is supported on SQL Server 2005.

Lastly, if you have multiple servers and are considering new hardware for more than one of them, it might make sense to look at buying one large 64-bit server and performing some consolidations. I might recommend that you wait for the next version of SQL Server if you are worried about conflicts, as I have heard rumors of switches to help govern resource usage in Katmai (SQL Server 2008).

A quick summary of the differences:

Security
  SQL Server 2000: Owner = schema; old users can be hard to remove at times.
  SQL Server 2005: Schema is separate. Better granularity in easily controlling security. Logins can be authenticated by certificates.

Encryption
  SQL Server 2000: No options built in; expensive third-party options requiring proprietary skills to implement properly.
  SQL Server 2005: Encryption and key management built in.

High Availability
  SQL Server 2000: Clustering or log shipping require Enterprise Edition. Expensive hardware.
  SQL Server 2005: Clustering, database mirroring, or log shipping available in Standard Edition. Database mirroring can use cheap hardware.

Scalability
  SQL Server 2000: Limited to 2GB RAM and 4 CPUs in Standard Edition. Limited 64-bit support.
  SQL Server 2005: 4 CPUs, no RAM limit in Standard Edition. More 64-bit options offer chances for consolidation.

Conclusion

These seem to be the major highlights from my perspective as an administrator. While there are other improvements, such as the schema changes flowing through replication, I'm not sure that they represent compelling changes for the non-DBA.

In the next article, I'll examine some of the changes from a developer perspective and see if any of those give you a reason to upgrade.

And I welcome your comments and thoughts on this as well. Perhaps there are some features I've missed in my short summary.

The Secrets of Great Due Diligence

This continues the self-study material I have gathered during my exploration of new ideas and knowledge. My intention is simply to compile related material in one place for everyone.
Food for thought.



With Thanks:
A Harvard Business Review excerpt.



Sealing the deal is the easy part. But first comes due diligence. Here's how to calculate your target's stand-alone value.


Deal making is glamorous; due diligence is not. That simple statement goes a long way toward explaining why so many companies have made so many acquisitions that have produced so little value. Although big companies often make a show of carefully analyzing the size and scope of a deal in question—assembling large teams and spending pots of money—the fact is, the momentum of the transaction is hard to resist once senior management has the target in its sights. Due diligence all too often becomes an exercise in verifying the target's financial statements rather than conducting a fair analysis of the deal's strategic logic and the acquirer's ability to realize value from it. Seldom does the process lead managers to kill potential acquisitions, even when the deals are deeply flawed. [...]

What can companies do to improve their due diligence? To answer that question, we've taken a close look at twenty companies—both public and private—whose transactions have demonstrated high-quality due diligence. We calibrated our findings against our experiences in 2,000-odd deals we've screened over the past ten years. We've found that successful acquirers view due diligence as much more than an exercise in verifying data. While they go through the numbers deeply and thoroughly, they also put the broader, strategic rationale for their acquisitions under the microscope. They look at the business case in its entirety, probing for strengths and weaknesses and searching for unreliable assumptions and other flaws in the logic. They take a highly disciplined and objective approach to the process, and their senior executives pay close heed to the results of the investigations and analyses—to the extent that they are prepared to walk away from a deal, even in the very late stages of negotiations. For these companies, due diligence acts as a counterweight to the excitement that builds when managers begin to pursue a target.

The successful acquirers we studied were all consistent in their approach to due diligence. Although there were idiosyncrasies and differences in emphasis placed on their inquiries, all of them built their due diligence process as an investigation into four basic questions:

  • What are we really buying?

  • What is the target's stand-alone value?

  • Where are the synergies—and the skeletons?

  • What's our walk-away price?

[Here] we'll examine each of these questions in depth, demonstrating how they can provide any company with a solid framework for effective due diligence. [...]

Once the wheels of an acquisition are turning, it becomes difficult for senior managers to step on the brakes.

What is the target's stand-alone value?
Once the wheels of an acquisition are turning, it becomes difficult for senior managers to step on the brakes; they become too invested in the deal's success. Here, again, due diligence should play a critical role by imposing objective discipline on the financial side of the process. What you find in your bottom-up assessment of the target and its industry must translate into concrete benefits in revenue, cost and earnings, and, ultimately, cash flow. At the same time, the target's books should be rigorously analyzed not just to verify reported numbers and assumptions but also to determine the business's true value as a stand-alone concern. The vast majority of the price you pay reflects the business as is, not as it might be once you've won it. Too often the reverse is true: The fundamentals of the business for sale are unattractive relative to its price, so the search begins for synergies to justify the deal.

Of course, determining a company's true value is easier said than done. Ever since the old days of the barter economy, when farmers would exaggerate the health and understate the age of the livestock they were trading, sellers have always tried to dress up their assets to make them look more appealing than they really are. That's certainly true in business today, when companies can use a wide range of accounting tricks to buff their numbers. Here are just a few of the most common examples of financial trickery:

  • Stuffing distribution channels to inflate sales projections. For instance, a company may treat as market sales many of the products it sells to distributors—which may not represent recurring sales.

  • Using overoptimistic projections to inflate the expected returns from investments in new technologies and other capital expenditures. A company might, for example, assume that a major uptick in its cross selling will enable it to recoup its large investment in customer relationship management software.

  • Disguising the head count of cost centers by decentralizing functions so you never see the full picture. For instance, some companies scatter the marketing function among field offices and maintain just a coordinating crew at headquarters, which hides the true overhead.

  • Treating recurring items as extraordinary costs to get them off the P&L. A company might, for example, use the restructuring of a sales network as a way to declare bad receivables as a onetime expense.

  • Exaggerating a Web site's potential for being an effective, cheap sales channel.

  • Underfunding capital expenditures or sales, general, and administrative costs in the periods leading up to a sale to make cash flow look healthier. For example, a manufacturer may decide to postpone its machine renewals a year or two so those figures won't be immediately visible in the books. But the manufacturer will overstate free cash flow—and possibly mislead the investor about how much regular capital a plant needs.

  • Encouraging the sales force to boost sales while hiding costs. A company looking for a buyer might, for example, offer advantageous terms and conditions on postsale service to boost current sales. The product revenues will show up immediately in the P&L, but the lower profit margin on service revenues will not be apparent until much later.

To arrive at a business's true stand-alone value, all these accounting tricks must be stripped away to reveal the historical and prospective cash flows. Often, the only way to do this is to look beyond the reported numbers—to send a due diligence team into the field to see what's really happening with costs and sales.
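As a back-of-the-envelope illustration of stripping away one such trick, here is a sketch (all figures invented for the example) of how postponed capital expenditure inflates reported free cash flow, and what that does to a simple perpetuity valuation:

```python
# Hypothetical illustration: reported vs. normalized free cash flow.
# All dollar figures and the discount rate are invented for this sketch.

def free_cash_flow(operating_cash_flow, capex):
    """Free cash flow = operating cash flow minus capital expenditure."""
    return operating_cash_flow - capex

def perpetuity_value(fcf, discount_rate, growth_rate=0.0):
    """Gordon-growth stand-alone value of a steady cash flow stream."""
    return fcf * (1 + growth_rate) / (discount_rate - growth_rate)

ocf = 20.0                # operating cash flow, $M per year
reported_capex = 2.0      # seller postponed machine renewals
sustaining_capex = 5.0    # what the plant actually needs each year

reported_fcf = free_cash_flow(ocf, reported_capex)      # 18.0
normalized_fcf = free_cash_flow(ocf, sustaining_capex)  # 15.0

rate = 0.10
print(perpetuity_value(reported_fcf, rate))    # 180.0 - the dressed-up value
print(perpetuity_value(normalized_fcf, rate))  # 150.0 - closer to stand-alone value
```

Here the seller's postponed renewals overstate stand-alone value by a fifth; the due diligence team's job in the field is to find the sustaining capex figure the books do not show.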

That's what Cinven, a leading European private equity company, did before acquiring Odeon Cinemas, a UK theater chain, in 2000. Instead of looking at the aggregate revenues and costs, as Odeon reported them, Cinven's analysts combed through the numbers of every individual cinema in order to understand the P&L dynamics at each location. They were able to paint a rich picture of local demand patterns and competitor activities, including data on attendance, revenues, operating costs, and capital expenditures that would be required over the next five years. This microexamination of the company revealed that the initial market valuation was flawed; estimates of sales growth at the national level were not justified by local trends. Armed with the findings, Cinven negotiated to pay £45 million less than the original asking price.

Getting ground-level numbers usually requires the close cooperation of the acquisition target's top brass. An adversarial posture almost always backfires. Cinven, for example, took pains to explain to Odeon's executives that a deep understanding of Odeon's business would help ensure the ultimate success of the merger. Cinven and Odeon executives worked as a team to examine the results of each cinema and to test the assumptions of Odeon's business model. They held four daylong meetings in which they went through each of the sites and agreed on the most important levers for revenue and profit growth in the local markets. Although the process may strike the target company as excessively intrusive, target managers will find there are a number of benefits to going along with it beyond pleasing a potential acquirer. Even if the deal with Cinven had fallen apart, Odeon would have emerged from the deal's due diligence process with a much better understanding of its own economics.

Of course, no matter how friendly the approach, many targets will be prickly. The company may have something to hide. Or the target's managers may just want to retain their independence; people who believe that knowledge is power naturally like to hold on to that knowledge. But innocent or not, a target's hesitancy or outright hostility during due diligence is a sign that a deal's value will be more difficult to realize than originally expected. As Joe Trustey, managing partner of private equity firm Summit Partners, says: "We walk away from a target whose management is uncooperative in due diligence. For us, that's a deal breaker."

Principles of Negotiating

27 Principles of Negotiating

Feb 1, 2003 12:00 PM, RCM Staff Report

Basics to Keep in Mind

  1. If you ask for something before a contract is signed, it's called “negotiating.” If you ask for something after a contract is signed, it's called “begging.” It's better to be a good negotiator than an expert beggar.

  2. From negotiator Chester Karras: “You don't get what you deserve, you get what you negotiate.”

  3. From motivational expert Zig Ziglar: “You can get anything in life, if you help enough other people get what they want.”

  4. Everything is negotiable, but everything has a price.

  5. Quoted prices are invitations to buy, but not statements of value.

Important Fundamentals

  1. Terms are just as important as dollars. Many people focus on rates, dates, and space (the big three of meeting planning), but the other fine print — such as liability and attrition — can have just as much importance. These things will translate into dollars.

  2. Negotiate at the proper authority level. Negotiate with the person who can say “yes.” Don't let your negotiation get lost in the translation. You don't want to have to negotiate it more than once. Ask to negotiate with someone who has the authority to go “off the script” or the rate card. Refuse to negotiate with someone who doesn't have that authority.

  3. If you want something, ask for it. Good negotiators do not put their best terms on the table first.

  4. Focus on the relationship. It's important that the relationship is still there once you're through with the negotiations. You don't want to get to the end of an agreement and never want to see each other again.

The Four Unwritten Rules

In every negotiation, there are four unwritten variables. All four exist in every negotiation, whether or not you know or understand them.

  1. Power

    This is the ability to get the other side to do things in the way you see favorable. The top two power sources are competition and the printed word. If a hotel knows that four other hotels in town want your business, then that hotel likely will want your business, too. Hotels play that game, too. They try to get more than one group interested in the hotel. And remember: Always question the printed word. Printed rates are not final rates.

  2. Time

    Ninety percent of the negotiating happens in the last 10 percent of the time allotted. Negotiating will go on forever unless one side imposes a deadline. The corollary is that time works against the person who doesn't have it. Never reveal your real deadline, and never negotiate when you're in a hurry.

  3. Knowledge

    Knowledge is a combination of expertise and information-gathering regarding the wants and needs of the other side. How and when is the person you're dealing with evaluated? How experienced is the person? What's the hotel's average daily rate, its peak season, and does it have other customers who want the same dates?

  4. Leverage

    Leverage is your ability to get the hotel to want your business and to give you favorable terms.

Negotiating Gambits

Beginning Gambits occur at the start of negotiations.

  1. The Flinch

    Most religious meeting planners are born with this: the ability to express shock and dismay at what the other side is presenting. This technique forces the other side to adjust.

  2. Feel/Felt/Found Technique

    This is a way of acknowledging another person's feelings without giving any ground. It's also a way to disagree without being disagreeable. Here's the script: “I understand how you feel. Others have felt the same way, but when they have found out more about us, they have come around.”

  3. First Offers

    The general rule is to never accept the first offer.

  4. The Vise

    The purpose of the vise is to squeeze the price range up or down in your favor. When someone names a price, you say: “You'll have to do better than that.” But be prepared for the response: “How much better do I have to do?”

Middle Gambits occur during the middle of negotiations, the point at which most negotiations begin to stall. Middle gambits are used to keep things going, assuming that you want to do business with this party. There are two basic techniques.

  1. The Trade-Off

    Never give a concession without getting a concession. This is the secret to keeping a negotiation balanced. It keeps the other side from nibbling you to death. They know they'll have to give up something for everything they get.

  2. The Set-Aside

    When you're deadlocked on an issue, set it aside and come back to it after you've reached agreement on the easier issues. Why leave the toughest issues for last? Because by the end of negotiations, the process has momentum and both sides will have the motivation to be flexible.

Ending Gambits are the end games.

  1. BATNA

    When you reach the end and are asking yourself if you should go through with what you've negotiated, ask yourself: “What's my Best Alternative To a Negotiated Agreement?”

  2. The Walk-Away

    Your ability to negotiate is tied to your ability to walk away from the deal. This is why you want to give yourself options.

Requirements Elicitation

Summary

An overview of the Requirements Elicitation problem, emphasising its difficulties, is followed by a description of the Active Structure approach to generating and maintaining complex specifications.

Introduction

Requirements Elicitation might be described as eliciting a specification of what is required by allowing experts in the problem domain to describe the goals to be reached during the problem resolution. In practice, we may have only a vague desire at the beginning of the process, such as "We want a new aircraft carrier", and at the end of the process a detailed description of the goals and some clear idea of the steps necessary to reach them.

There are several things wrong with this description. Where does the logical model reside - in people’s heads? Is there an expert with sufficient breadth and depth of domain knowledge to ensure the goal and all its subgoals are consistent and achievable? If there is not, are we merely leaving to the design stage the process of systematising the subgoals? It is very likely that many goals will be inconsistent, even deliberately contradictory. Can we say the Requirements Elicitation stage is complete while this is so? We certainly can if our methods for supporting the elicitation have no means of establishing the consistency of the goals, or even of describing many of them. How precise do we need to be in specifying our goals? Is there anything left to do in the design stage, or have we gazumped it by not having a method of description that permits variability in the requirements?

Let’s use an example of a helicopter. Before the first helicopter had flown, and before people had become familiar with its performance envelope, how successful would requirements elicitation have been in eliciting a consistent and achievable set of goals? An airborne vehicle which could hover, move up, down, sideways, even backwards. The models in people’s heads would have veered among a hot air balloon, a hummingbird, a hovering hawk, and a winged craft of the time, like a DC3. Until at least a rough performance envelope appeared to shape and limit what people thought, requirements elicitation would have been unlikely to succeed if it had relied only on the users (what users?) to provide a consistent mental model. Does it turn out that Requirements Elicitation only works well when people know almost exactly what they want, and hardly works at all when there is significant design required to move from a concept with very hazy boundaries to an object within the limits of current or very near-term technology?

Let’s go back to the aircraft carrier. It’s 20 years since we built one. How do we gather experts in the design, building and use of such an object? Are the mental models of at least some of the experts mired in the past, in terms of ship propulsion, low speed abilities of fighter aircraft, weapons systems command and control, vulnerability to attack? We can mix in experts in new fields with no integration experience, but how do we get a consistent mix of old and new? One way is to build an Active Structure as we gather requirements. Some typical requirements:

  • Role - why a sea based platform is necessary

  • Capabilities - number and type of aircraft, range, speed, sea keeping

  • Survivability - who is attacking it and its permissible failure space

  • How much

  • How long to service

  • Service life

Are there requirements, like why we are building it, that we won’t attempt to formalise, or even mention? Will requirements that no-one chooses to mention bring the project down later on? How far will we go out into the larger system of which this is a component to understand and validate the decision-making?

The area receiving most attention for the use of Requirements Elicitation is software systems. We do other types of complex systems badly, but their physical reality restricts the grossness of the errors we commit. Software systems can be more complex than any other system we attempt to build, but our difficulty in visualising their behaviour can lead to the grossest design errors. Even if we do them well, their nature allows a good idea to radically alter the topology of their structure, invalidating much of the analysis that was supporting them.

The end result of eliciting requirements needs to be a compromise among competing requirements. Everyone in the group may wish to have a voice, but this may leave us with a mishmash, disenfranchising those who are not present at a later point where consistency is enforced. The sooner we can impose some discipline on the requirements, the sooner people are pushed to expose what it is they really need. A way to do this is to ensure we do not allow inconsistent requirements to propagate in parallel for very long. The longer they propagate, the more elaboration, consensual agreement and frozen structure around them, the harder they are to root out. A planning system which makes no attempt to model consequences of decisions contributes to the management problem by allowing user experts to build expectations on poor foundations.

Imposing a Structure

There have been many attempts to impose some particular structure on the high level planning process. The chosen structures are often those that people happen to be familiar with, whether or not they are relevant to the problem - from decision trees to expert systems to object oriented hierarchies. While it is certainly worthwhile to systematise what we do, the imposition of a rigid structure can drive the planning process in undesirable directions. For example, a rich and detailed hierarchy for motor vehicle design may be specifying the glove box material before the engine placement or even the engine type is decided. The apparent rapidity and completeness of the design process masks the fact that the design was already implicit in the structure we adopted - that is, no new design above the trivial level can occur, because cross connections are not permissible in the hierarchical structure. Successful designs come from a rethink of the requirements to avoid what seems necessary but is not, or from a realisation that new materials or configurations or processes are economically viable - a nonstick frypan, an east-west engine, the cellular phone. The topology of a logical model of the requirements is so fluid in the early stages of specification that any attempt to impose an alien structure is doomed to failure.

It may be that the requirements elicitation process is of its nature limited to copying an existing design or choosing among a few well known alternatives. Then a simple predetermined structure may be possible. Even here, the structure will need to adapt as competing requirements are compromised out. In all but the simplest cases, the variability in the topology defeats rigid preconceived notions of a directed decision structure, because there is no stage to which we may not be driven back in the search for a solution. There is a structure, the structure of the relationships that make up the problem. Using it may require us to think more flexibly than we might like, because nothing is certain.

The Planning Spectrum

The requirements start at one - "We want a new carrier" - have some outline by the time they number ten or fifty, and the final detailed requirements might number fifty thousand and require continual maintenance through the entire planning process. What method is available to support the move from one end of the spectrum to the other, or at least to the point where all variability has gone? Do we need a tool for Requirements Elicitation, another for high level design, another for monitoring development? All of these steps affect the requirements.

In the later stages we may not be able to handle the sheer weight of detail in a flexible way, but that does not mean we cannot handle the control aspects with the same tool that supported the RE step. In the diagram, we are ignoring the step before Requirements Elicitation, the step that added the proposal to the program in the first place. Planning support can extend back to encompass this stage as well. At each stage boundary, we are passing across, and can pass back, a web of constraints on what is proposed at the particular stage.

What Are The Requirements

Why not elicit the requirements for the Requirements Elicitation process?

We would like to build a logical/existential model as we go, with continuous Truth Maintenance so inconsistent requirements are weeded out as soon as possible.
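A minimal sketch of this idea (the class and its structure are invented here, not the Active Structure implementation): hold each requirement as a range constraint on a named quantity and intersect on arrival, so a contradiction surfaces the moment it is stated rather than propagating unnoticed.

```python
# Minimal sketch of continuous consistency checking over requirements.
# Names and structure are invented for illustration.

class RequirementStore:
    def __init__(self):
        self.ranges = {}  # variable -> (lo, hi), the surviving interval

    def require(self, var, lo, hi):
        """Intersect the new requirement with what is already known."""
        cur_lo, cur_hi = self.ranges.get(var, (float("-inf"), float("inf")))
        new_lo, new_hi = max(cur_lo, lo), min(cur_hi, hi)
        if new_lo > new_hi:
            raise ValueError(
                f"inconsistent requirement on {var}: "
                f"[{lo}, {hi}] conflicts with [{cur_lo}, {cur_hi}]")
        self.ranges[var] = (new_lo, new_hi)

store = RequirementStore()
store.require("aircraft", 40, 70)   # one stakeholder
store.require("aircraft", 60, 90)   # another; the intersection is [60, 70]
try:
    store.require("aircraft", 10, 30)  # contradicts both - caught at once
except ValueError as e:
    print("rejected:", e)
```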

We need a language of approximate discourse with designers and other stakeholders. We want to tell people where we would like to be on the cost/capability curve, not screw everything down to precise values that take on the force of holy writ, then turn out to be unachievable or wrong.

We should be able to quantify things, but may wish to approximate, so a range can be accepted and used for computation of alternatives. We might be specifying integers, 40-70 aircraft, or real numbers, 123.3-140.7 tons of fuel per hour.
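Such ranges remain usable for computation of alternatives. A small sketch of interval arithmetic (the figures echo those above; everything else is invented for illustration):

```python
# Sketch of range arithmetic over requirements held as (lo, hi) intervals.
# Helper names and the sortie figures are invented.

def add(a, b):
    """Add two intervals: the result spans all possible sums."""
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    """Multiply two intervals, assuming non-negative bounds."""
    return (a[0] * b[0], a[1] * b[1])

fuel_per_hour = (123.3, 140.7)   # tons/hour, a real-valued range
sortie_hours = (4, 6)            # an integer range

fuel_per_sortie = mul(fuel_per_hour, sortie_hours)
print(fuel_per_sortie)           # (493.2, 844.2), up to float rounding

two_sorties = add(fuel_per_sortie, fuel_per_sortie)
print(two_sorties)
```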

Not everything we want to specify will be analysable - sometimes it will be preferable to use stochastic methods of specification instead of researching relationships, sometimes we will have no choice. We should be able to smoothly integrate analysis with piecewise approximation and probabilistic methods.

What structure should we use? Why not just use the structure of the requirements? Any other structure will either be restrictive, or we will need to change it as we go along. Adding requirements changes the topology of the structure. If we choose to link aircraft numbers to hull speed, that is what we want. There may be a carrier solution with 40 planes and fifty knots, as well as the conventional solution with 75 planes and 20 knots, but we want to rule out 40 planes and 20 knots.
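The aircraft/hull-speed linkage above might be sketched as a predicate over configurations (the thresholds are taken from the example; the helper itself is invented):

```python
# Sketch of a linked requirement: aircraft numbers and hull speed are
# acceptable only in certain combinations. Thresholds are illustrative.

def acceptable(planes, knots):
    # Either the conventional solution (many planes, modest speed)
    # or the fast light carrier (fewer planes, high speed),
    # but not few planes *and* low speed.
    conventional = planes >= 75 and knots >= 20
    fast_light = planes >= 40 and knots >= 50
    return conventional or fast_light

print(acceptable(75, 20))  # True  - conventional solution
print(acceptable(40, 50))  # True  - fast light carrier
print(acceptable(40, 20))  # False - ruled out by the linkage
```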

We may wish to hinge one requirement off another or link them together in some way, if this then that, but still at the tentative stage, exercising logical or existential control over one or more requirements by outcomes of other requirements, and vice versa.

The requirements may start off planar, a few scribbled on a whiteboard, but will rapidly develop a layered structure, where subgroups of experts are defining requirements for reasonably independent components. The highest level requirements that we are working on may well turn out to be a component of some even higher level requirement (someone has to sell the acquisition budget, of which our carrier is a part), so the language of requirements description should permit unlimited layering, both up and down from where we notionally started.

We should be able to link across components from any level to any level. There is a weak form of inheritance in the structure - the propulsion inherits the weight and the hull shape, but in most cases the inheritance is circular - the weight is assumed to be in the range of the proposed propulsion. It is the long range connections among largely independent entities that are the most valuable to describe, as they are likely to be least understood by experts in the particular specialty.

We would like to start with one requirement and reach tens of thousands. As the outlines firm up, we want to keep the potential variability in the components. Occasionally we will need to back out of a dead end, where what we thought possible is found not to be so within the cost/time frame allowable. That means requirements that have long since been frozen anywhere in the overall specification may need to become as malleable as they were at first, while we search for another consistent configuration to which to jump. This is typical of design - there are small islands of success in an ocean of failure.

Time and money are fundamental requirements, just as much as speed and range. Their specification should not have the rigidity of conventional project planning methods, certainly not while we are determining whether an outcome is even possible.

The requirements should encompass everything that will lead to success or failure, and allow a wide range of stakeholders to see that their requirements are influencing the outcome.

The Network Approach

A logical network (we mean a network combining logic and existence and time) of analytic operators acting as undirected agents would seem to meet all of the requirements stated. This assumes that analysis of the requirements is worthwhile or even possible. Some of the requirements will not be analysable - they may still be modellable in an analytic system, using approximation methods.

The structure of the logical network comes only from the structure of the statements used to represent the requirements. The statements link objects through analytic operators under logical control. A seemingly simple change to the requirements can drastically change the topology of the network by causing two objects in logical space that were notionally far away to be adjoining, just as it can completely recast the structure of the requirements.

The statements in the logical network can be seen as constraints - that is, after all, what requirements are. At an early stage, there is no sense of solving the constraints, but there is a sense of reasoning about them. The statements are more than just constraints; they form a web of logical control over what is being described. Statements can also add two numbers together to get a third - that is, they are extensible into areas that are not constraints. If we attempt to maintain consistency of specification with a paper-based approach, inconsistencies rapidly creep in, because someone or something else has to attend to all the connections.
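The undirected character of such a statement can be sketched as an adder relation usable in any direction, depending on which two values are currently known (a hypothetical helper, not the Active Structure implementation):

```python
# Sketch of an undirected "adder" statement: the relation a + b = c
# can be exercised in any direction. The function is invented.

def adder(a=None, b=None, c=None):
    """Given any two of a, b, c with a + b = c, return all three."""
    if a is not None and b is not None:
        return a, b, a + b
    if a is not None and c is not None:
        return a, c - a, c
    if b is not None and c is not None:
        return c - b, b, c
    raise ValueError("need at least two known values")

print(adder(a=2, b=3))   # (2, 3, 5)
print(adder(b=3, c=5))   # (2, 3, 5)
print(adder(a=2, c=5))   # (2, 3, 5)
```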

Numbers in the network can be represented as singular values or ranges, either of integers or reals, and these ranges can be propagated through the structure and used for calculation or logical reasoning.
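Logical reasoning over such ranges is naturally three-valued: a comparison may be settled either way, or undecidable until the ranges narrow. A sketch (the helper is invented, with None standing in for "not yet decidable"):

```python
# Sketch of reasoning with ranges: a comparison between intervals can be
# definitely true, definitely false, or undecided (None).

def range_less_than(a, b):
    """Is every value in interval a below every value in interval b?"""
    if a[1] < b[0]:
        return True      # intervals disjoint, a entirely below b
    if a[0] >= b[1]:
        return False     # a entirely at or above b
    return None          # overlapping: not yet decidable

print(range_less_than((40, 70), (80, 100)))  # True
print(range_less_than((80, 100), (40, 70)))  # False
print(range_less_than((40, 70), (60, 90)))   # None
```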

Time and money can be represented in a flexible way, using ranges and logical control of variability. Alternatives and contingencies fit easily into the network structure, just as they should in any well thought out set of requirements. [1].

Previous papers (see Technical Discussion) have described various species of network operators - logical structure, simple and complex analytic operators. Other operators in the network can store distributions and correlations between variables that have been found by reading databases resulting from scenario analysis. These operators go through a learn-store-output cycle, and respond to changes in their probability control by changing their outputs from a range encompassing all alternatives through to singular outputs.

Layering of Knowledge

Every field of knowledge has its specialties - whether medicine, aircraft design or physics. There are areas which are largely independent, but still need a few connections to other specific areas or to general facts. This problem of structuring knowledge in shells might be shown diagrammatically as a set of nested shells.

There are conduits at interfaces, the surfaces of these shells of knowledge, with still a need to penetrate the shell for a more detailed connection, one that had not been thought of when the boundaries of the particular knowledge were established. The boundaries keep changing as the knowledge contained in the shells changes - too many intrusive connections and the boundaries need to be re-established to minimise intrusion on the specialist knowledge, but intrusions there will be. For a detailed specification of a complex entity, fifty levels of knowledge shelling can be easily reached.

The environments in a logical network provide unlimited layering, while retaining the ability to connect in an undirected way across any boundary.

Conclusion

Requirements Elicitation demands flexibility of description.

The undirected logical network of Active Structure can provide support across the stages of Requirements Elicitation, high level design, development, manufacture. It can do this because it uses the structure of the problem, not some preconceived directed structure. The lack of directionality in its connections allows anything to be the current subgoal of its analysis. The undirectedness provides Consistent Reasoning, that is, maintenance of consistency, throughout the structure. The network propagates messages to operators which can change the topology of the network and show the results of decisions and new requirements at every stage of the planning process. It appears to be an appropriate support tool for any phase of high level planning.

The Next Step

Requirements Elicitation can be extended to the reading of text - this allows people to describe, in a flexible way, exactly what it is that they require. The machine converts what it reads into a form capable of extension by other text, and it becomes capable of checking for validity among several descriptions at different levels of granularity - see The “Swing Space” – Understanding the meaning of a complex paragraph (the example is for property leases; the principle is universal). There is a cost in preparing the machine to read in a particular domain, but where the cost of what is described runs to many millions or billions, this cost is trivial.