Question 1
Why do the steps for taking a new output (product/service) to the market not usually happen in a clean, smooth sequence?
Student-friendly explanation
In practice, “new output to market” work behaves more like a cycle with feedback than a straight line. Even if we list the stages (idea/product selection, design, process planning, economic analysis, pilot/validation, capacity decisions, launch, etc.), real decisions keep getting revised as new information appears.
Main reasons the sequence becomes non-linear
- Feedback loops are unavoidable: Later learning (from design trials, market inputs, cost estimates, or process constraints) often forces changes to earlier decisions, so teams loop back instead of moving only forward.
- Economic analysis may not wait for “perfect” early inputs: the course material notes that cost and economic analysis is sometimes carried out only after substantial development work has already begun, because reliable cost and market data emerge as the design becomes clearer. This naturally disrupts a smooth sequence.
- Features and improvements can enter at multiple points: New ideas (or new customer/market requirements) can add or modify features at almost any stage, effectively starting a fresh cycle of development rather than continuing the old sequence unchanged.
- Process design may run in parallel with product design: If market success depends heavily on low cost or on building large capacity quickly, process decisions may need to be made alongside product development, not after it.
- Product selection is an ongoing activity: The “selection” and “development” streams can overlap; new options can enter the pipeline while earlier ones are still being evaluated or developed.
Practical takeaway
A realistic way to manage this is to expect iteration and overlap, and to plan control points where feedback is reviewed and decisions are updated, rather than assuming a one-pass sequence.
Question 2
What information is required for project crashing? Illustrate the information items using a familiar project structure.
Meaning of project crashing (in simple terms)
Project crashing is the deliberate shortening of project duration by reducing the times of selected activities, usually by applying additional resources, which increases direct activity cost. The goal is to find a better time–cost balance, taking into account indirect costs that fall when the project finishes earlier.
Information needed for crashing (what you must know before you can crash)
- Project network data: a clear list of activities and the precedence relationships (which activities must finish before others can start).
- Activity time data: for each activity, its normal time and its crash time (minimum feasible time).
- Activity cost data: for each activity, its normal cost and its crash cost.
- Cost of time reduction (crash cost rate / “slope”): the additional cost per unit time saved for each activity, commonly computed as (a small numeric sketch follows this list):
$$ \text{Cost per unit time saved} \;=\; \frac{\text{Crash Cost} - \text{Normal Cost}}{\text{Normal Time} - \text{Crash Time}} $$
- Critical path(s) and critical activities: you must compute ES(j), EF(j), LS(j), LF(j), and slack to identify activities with zero slack (critical). Only reducing time on critical activities can reduce overall project completion time.
- Indirect project cost per unit time: overheads and time-related losses/benefits tied to project duration (these typically decrease as duration decreases).
- Feasibility limits and resource constraints: confirmation that extra resources can actually be arranged, and that no activity is pushed below crash time; also recognize that limited resources can restrict what can be done simultaneously.
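As a quick illustration of the slope formula, here is a minimal Python sketch; the times and costs are invented for illustration, not taken from the study material.

```python
# Crash cost slope: extra cost per unit of time saved (illustrative numbers).
def crash_slope(normal_cost, crash_cost, normal_time, crash_time):
    """Additional cost incurred for each unit of time saved by crashing."""
    return (crash_cost - normal_cost) / (normal_time - crash_time)

# Example: an activity normally takes 8 weeks at Rs. 4,000 and can be
# crashed to 5 weeks at Rs. 7,000.
print(crash_slope(4000, 7000, 8, 5))  # -> 1000.0 (Rs. per week saved)
```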
How the information is used (crashing logic)
- Crash only critical path work: reducing time on a non-critical activity does not shorten project duration.
- Choose the least-cost option first: select the critical activity (or one activity on each critical path, if multiple critical paths exist) with the lowest cost per unit time saved (see the selection sketch after this list).
- Recompute after each crash step: after crashing, recompute ES/LS and re-identify critical paths, because the critical path can change as durations change.
- Stop when further crashing is impossible: if on at least one critical path none of the activities can be crashed further (already at crash time), the project cannot be shortened further.
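To make the selection and stopping rules concrete, here is a minimal sketch of one crashing pass, under the simplifying assumption of a single, already-identified critical path; all activity data are hypothetical.

```python
# One crashing step: pick the cheapest critical activity that can still be
# shortened (single critical path assumed; data invented for illustration).
acts = {
    "A": {"normal": 6, "crash": 4, "slope": 300},
    "B": {"normal": 5, "crash": 3, "slope": 500},
    "C": {"normal": 4, "crash": 4, "slope": None},  # already at crash time
}
critical_path = ["A", "B", "C"]  # assumed known from a CPM pass

candidates = [a for a in critical_path
              if acts[a]["normal"] > acts[a]["crash"]]
if candidates:
    best = min(candidates, key=lambda a: acts[a]["slope"])
    print(f"Crash {best} by 1 week at extra cost {acts[best]['slope']}")
    # ...then recompute ES/LS and re-identify the critical path(s).
else:
    print("No critical activity can be crashed further; stop.")
```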
Illustration using a familiar “template” project data set (what the data typically looks like)
The course material’s example of crashing uses exactly the kind of information you would collect: (i) activities and predecessors, (ii) normal time and crash time for each activity, (iii) normal cost and crash cost for each activity, and (iv) an indirect project cost rate per week. With those items, the example computes the cost per week saved for each activity and then crashes step-by-step while checking how the critical path changes.
Question 3
“A larger sample discriminates better between good and bad lots.” Critically examine this statement in the context of acceptance sampling.
What the statement is pointing to
In acceptance sampling, we decide whether to accept or reject a lot based on a sample. The key behavior is captured by the Operating Characteristic (OC) curve, which relates the probability of acceptance (Pa) to the actual lot quality (fraction defective).
Why the statement is generally true
- More information, less randomness: a larger sample reduces sampling variability, so the decision is less “accidental.”
- OC curve becomes steeper: with a larger sample size (keeping the acceptance rule comparable), the OC curve tends to separate good-quality lots from poor-quality lots more sharply, meaning better discrimination (a small numerical sketch follows this list).
- Risks can be controlled more tightly: larger samples can help reduce producer’s risk (α) and consumer’s risk (β) for given quality points such as AQL and LTPD, because the plan can be designed to make acceptance/rejection probabilities closer to the intended levels.
Why the statement needs a “critical” qualifier
- Higher inspection cost and time: increasing sample size increases inspection effort; this may be impractical in high-volume operations or when quick decisions are needed.
- Destructive or costly testing: if testing destroys the item (or is expensive), a larger sample can be economically unacceptable.
- Diminishing returns: beyond a point, increasing sample size yields only small improvements in discrimination compared to the added cost.
- Discrimination depends on the plan, not only on sample size: acceptance number and plan design (single, double, sequential sampling) also influence the OC curve and the risk levels.
Balanced conclusion
The statement is directionally correct: larger samples typically provide better discrimination via a steeper OC curve. However, acceptance sampling is always a compromise among discrimination power, inspection economics, and the intended levels of α and β. A “best” sample size is therefore not the largest possible one, but the one that meets risk and cost requirements for the context.
Question 4
Differentiate wastivity and productivity. Are “reducing wastivity” and “increasing productivity” essentially the same?
Definitions (clear distinction)
- Productivity: the ratio of output to input.
- Wastivity: the ratio of waste to input.
$$ \text{Productivity } (P) = \frac{O}{I} \qquad\qquad \text{Wastivity } (W_s) = \frac{W}{I} $$
Relationship between the two
If we treat each unit of input as ending up either in output or in waste (with output and waste measured in the same units as input), then:
$$ I = O + W \;\;\Rightarrow\;\; \frac{O}{I} = 1 - \frac{W}{I} $$
So, under this framing, productivity and wastivity are directly connected:
$$ P = 1 - W_s $$
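As a quick numeric check (values assumed purely for illustration): if 100 kg of input yields 80 kg of good output and 20 kg of waste, then

$$ P = \frac{80}{100} = 0.8, \qquad W_s = \frac{20}{100} = 0.2, \qquad P = 1 - W_s. $$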
So are they “one and the same”?
- Conceptually, they are tightly linked: when waste is defined and measured consistently, reducing wastivity will increase productivity because waste is treated as the “non-output” part of input.
- Managerially, the actions can look different: productivity can be improved by (i) increasing output for the same input, (ii) reducing input for the same output, and (iii) reducing wastefulness (wastivity). The wastivity lens is a specific improvement route that focuses on eliminating non-productive losses.
Question 5
Write short notes on any three of the following topics. (Chosen: Locational break-even analysis, ABC analysis, Critical Path Method.)
(a) Locational break-even analysis
Purpose: This method compares alternative facility locations by examining how total cost (and sometimes profit) changes with volume, using fixed and variable cost estimates for each location.
Core idea: For each location, total cost is expressed as a straight-line function of volume (Q).
$$ TC = F + vQ $$
- F = fixed cost at the location
- v = variable cost per unit
- Q = expected volume
Decision rule: Plot (or compute) total cost lines for locations; the best location at a given Q is the one with the lowest total cost. Intersection points indicate the volume ranges where each location dominates.
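A minimal Python sketch of this decision rule follows; the fixed and variable costs for the two candidate sites are invented for illustration.

```python
# Locational break-even: TC = F + vQ for each site (illustrative data).
locations = {
    "Site X": {"F": 100_000, "v": 20.0},
    "Site Y": {"F": 150_000, "v": 12.0},
}

def total_cost(site, q):
    d = locations[site]
    return d["F"] + d["v"] * q

for q in (4_000, 8_000):
    best = min(locations, key=lambda s: total_cost(s, q))
    print(f"Q={q}: lowest-cost site is {best}")

# Crossover volume where the cost lines intersect: F1 + v1*Q = F2 + v2*Q
x, y = locations["Site X"], locations["Site Y"]
q_star = (y["F"] - x["F"]) / (x["v"] - y["v"])
print(f"Cost lines cross at Q = {q_star:.0f} units")
```

Below the crossover volume the low-fixed-cost site wins; above it, the low-variable-cost site does.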
Limitations: It is cost-volume focused and may not capture qualitative factors (infrastructure, risk, service level, etc.) unless those are handled separately.
(d) ABC analysis
Purpose: ABC analysis is a selective inventory control technique that classifies inventory items by their annual consumption value (annual usage quantity × unit cost) so that managerial attention is concentrated where it matters most.
Typical pattern (as presented):
- A items: small number of items, very high share of annual usage value (tight control needed).
- B items: moderate number and moderate value share (normal control).
- C items: large number of items, low value share (simple controls are sufficient).
How it is carried out: compute annual usage value item-wise, rank items in descending order, compute cumulative percentages, and set breakpoints for A, B, and C classes.
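The procedure is easy to sketch in Python; the item data and the class breakpoints (70% / 90%) below are common illustrative choices, not prescribed values.

```python
# ABC classification by annual usage value (illustrative data).
items = {  # item: (annual demand, unit cost)
    "P1": (1000, 50.0), "P2": (200, 8.0),   "P3": (5000, 30.0),
    "P4": (100, 2.0),   "P5": (400, 100.0), "P6": (50, 4.0),
}
usage = {k: d * c for k, (d, c) in items.items()}    # annual usage value
ranked = sorted(usage, key=usage.get, reverse=True)  # descending value
total = sum(usage.values())

cum = 0.0
for k in ranked:
    cum += usage[k]
    share = cum / total
    cls = "A" if share <= 0.70 else ("B" if share <= 0.90 else "C")
    print(f"{k}: value={usage[k]:>9,.0f}  cumulative={share:6.1%}  class {cls}")
```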
Management implication: apply strict review and accurate records to A items, periodic controls for B, and simplified methods for C (since the cost of control should not exceed the value protected).
(e) Critical Path Method (CPM)
Purpose: CPM is a project planning and control technique used when activity times are assumed deterministic (known with certainty). It identifies critical activities and computes timing information that supports scheduling and control.
Key outputs:
- ES(j), EF(j): earliest start and earliest finish times
- LS(j), LF(j): latest start and latest finish times
- Slack/float: allowable delay without delaying project completion
- Critical path: the longest-duration path; activities on it are critical (zero slack), so any delay delays the entire project.
Why it is useful: CPM focuses management attention on the activities that can delay completion and uses slack information to prioritize resources and monitoring.
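The forward/backward-pass computation is small enough to sketch directly; the four-activity network below is invented for illustration.

```python
# CPM forward and backward passes on a tiny activity-on-node network
# (durations and precedences are illustrative).
dur  = {"A": 3, "B": 4, "C": 2, "D": 5}
pred = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]           # a topological order

ES, EF = {}, {}
for a in order:                        # forward pass
    ES[a] = max((EF[p] for p in pred[a]), default=0)
    EF[a] = ES[a] + dur[a]

T = max(EF.values())                   # project duration
LS, LF = {}, {}
for a in reversed(order):              # backward pass
    succ = [s for s in order if a in pred[s]]
    LF[a] = min((LS[s] for s in succ), default=T)
    LS[a] = LF[a] - dur[a]

for a in order:
    slack = LS[a] - ES[a]
    tag = "critical" if slack == 0 else f"slack={slack}"
    print(f"{a}: ES={ES[a]} EF={EF[a]} LS={LS[a]} LF={LF[a]} ({tag})")
```

Here A–B–D emerges as the critical path (duration 12), while C carries 2 units of slack.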