We study the roles and properties of decomposition methods in large-scale stochastic decision-making systems, which are usually computationally demanding to solve or hard to analyze directly. By breaking the complex original problem into smaller sub-parts, we may be able to significantly reduce computing time, design simple algorithms that exploit complex network structures, or simplify the computation modules required to build the system. Specifically, we elaborate on three types of decomposition ideas and investigate the tradeoff between the gain in efficiency and the potential loss in performance.

1. Decomposing sequential decision-making systems along the time horizon. Solving a finite-horizon sequential decision-making problem can be very costly when the horizon is large, as the run-time typically grows linearly or super-linearly with the horizon. By temporally dividing the original sequential problem into several sub-problems, we can solve each sub-problem in parallel, which greatly reduces the computing time. In general, solving each sub-problem separately inevitably leads to a sub-optimal solution for the overall original problem. We show that the performance loss incurred by this type of decomposition is usually small for large-scale Markov decision processes with sufficiently long horizons.

2. Decomposing the structure of hierarchical causal networks. Stochastic decision-making problems on complex network structures are usually challenging to study analytically. However, by decomposing the original problem into sub-problems with simpler structure, we can gain insight into how to solve the general problem. For instance, it is usually not difficult to design a simple algorithm for each sub-problem by taking advantage of its structure.
However, combining the results of the sub-problems into a good solution for the original problem is not trivial, as it requires a comprehensive method that fully leverages the complex dependency relations among the sub-structures of the network. We focus on a sequential decision-making problem based on a causal inference network, and study an algorithm devised with the above decomposition idea. Performance bounds are introduced and proved for this algorithm.

3. Decomposing function blocks in distributed information processing systems. In the previous two categories of problems, the methods used to break down the original problem, whether dividing it along the temporal horizon or decomposing it according to its network structure, seem natural for the corresponding setups; the main challenge is to analyze the performance once the decomposition method is fixed. In some stochastic systems, however, it is not obvious how the complex original problem should be decomposed. In a distributed information processing system, for example, a local sensor may need to carry out a series of actions: receiving, estimating, and compressing the signal, and transmitting the data to centralized processing modules. Such a system becomes very costly as the number of local processors grows, because each processor must include a complex function block that completes all of the above actions on site. To reduce the cost of building large distributed systems, we study an unconventional design enabled by a decomposition technique. Instead of decomposing the overall information processing system, we focus on the function blocks within it. Decomposing the function block in the local processors gives us more flexibility in the system design. More specifically, we rearrange the order of the decomposed function blocks, which enables us to migrate part of the functions to the central unit.
In this way, we may adopt much simpler local units and thus reduce the cost of the system. In this report, we analyze the performance of this new design based on the decomposition and rearrangement of the function modules, and provide performance bounds for it.
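The first decomposition idea, dividing a finite-horizon problem along the time horizon, can be sketched in code. The following is a minimal illustration, not the report's actual algorithm: it assumes a small tabular MDP, and the function names (`backward_induction`, `temporally_decomposed`) and the choice of a zero terminal-value guess at each segment boundary are ours for illustration.

```python
import numpy as np

def backward_induction(P, R, H, terminal_v=None):
    """Finite-horizon tabular MDP solver. P has shape (A, S, S),
    R has shape (S, A); returns the optimal value function at time 0
    and the time-dependent policy (a list of H action vectors)."""
    S, A = R.shape
    v = np.zeros(S) if terminal_v is None else terminal_v
    policy = []
    for _ in range(H):
        # Q(s, a) = R(s, a) + sum_{s'} P(s' | s, a) * v(s')
        q = R + np.stack([P[a] @ v for a in range(A)], axis=1)
        policy.append(q.argmax(axis=1))
        v = q.max(axis=1)
    policy.reverse()  # steps were computed from the horizon backwards
    return v, policy

def temporally_decomposed(P, R, H, K):
    """Split the horizon into K equal segments and solve each segment
    independently (hence, in principle, in parallel), here with a zero
    terminal-value guess at each segment boundary. The guessed boundary
    values are the only source of sub-optimality."""
    seg = H // K
    segment_policies = [backward_induction(P, R, seg)[1] for _ in range(K)]
    return [pi for segment in segment_policies for pi in segment]

# A toy 2-state, 2-action MDP for demonstration.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])
v_joint, pi_joint = backward_induction(P, R, 10)
pi_parallel = temporally_decomposed(P, R, 10, 2)
```

Each call to `backward_induction` runs for `seg = H / K` steps instead of `H`, so with `K` workers the wall-clock time of the horizon-dependent part drops by roughly a factor of `K`, at the cost of the boundary-value error discussed above.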
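The third idea, rearranging decomposed function blocks, can likewise be sketched with a toy sensing model. This is our own illustrative setup, not the system studied in the report: each of several sensors observes a common scalar parameter in noise, and the function names and the uniform quantizer are assumptions. In the conventional order a sensor estimates on site and then compresses; in the rearranged order the sensor only quantizes and transmits, and estimation migrates to the central unit.

```python
import numpy as np

def quantize(x, levels=8, lo=-4.0, hi=4.0):
    """Uniform scalar quantizer: the only operation, besides
    transmission, that the simplified local unit performs."""
    edges = np.linspace(lo, hi, levels + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    idx = np.clip(np.digitize(x, edges) - 1, 0, levels - 1)
    return centers[idx]

def conventional(samples):
    """Each sensor estimates on site (mean of its own samples), then
    quantizes and transmits a single number; the center fuses them."""
    local_estimates = samples.mean(axis=1)   # estimate locally
    return quantize(local_estimates).mean()  # compress, transmit, fuse

def rearranged(samples):
    """Rearranged order: each sensor only quantizes its raw samples
    and transmits them; estimation is migrated to the central unit."""
    return quantize(samples).mean()          # estimate at the center

rng = np.random.default_rng(0)
theta = 1.0
samples = theta + rng.normal(size=(20, 5))   # 20 sensors, 5 samples each
est_conv = conventional(samples)
est_rear = rearranged(samples)
```

Both designs recover the parameter, but the rearranged local unit contains no estimator at all, which is the source of the cost reduction; the price is transmitting more (quantized) raw data, a concrete instance of the efficiency-versus-performance tradeoff studied in the report.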