Total Questions: 50
Expected Time: 50 Minutes

1. What is the purpose of a parallel reduction operation in parallel computing?
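
For reference alongside this question, a parallel reduction can be sketched as a chunk-and-combine sum. This is a minimal illustration only; the `parallel_sum` helper, its chunking scheme, and the use of a thread pool are assumptions for the sketch, not a prescribed answer:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Reduction sketch: split the input into chunks, reduce each chunk
    concurrently to a partial result, then combine the partials."""
    if not data:
        return 0
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, chunks))  # concurrent partial reductions
    return sum(partials)  # final combine step

print(parallel_sum(list(range(100))))  # 4950, same as sum(range(100))
```

The point of the pattern is that the expensive per-chunk work runs concurrently, while the cheap final combine stays sequential.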

2. Examine the impact of Amdahl's Law on the scalability of parallel computing systems, especially in the presence of communication overhead.

3. Explain how task parallelism differs from data parallelism in the context of parallel computing, and provide examples of when each is more suitable.

4. Examine the impact of task granularity on load balancing in parallel computing, discussing strategies for achieving optimal distribution of computational tasks.

5. Which parallel algorithm is commonly used for sorting large datasets?

6. Describe the challenges and solutions associated with achieving fault tolerance in large-scale parallel computing systems, with a focus on real-world applications.

7. What is the primary goal of parallel computing?

8. Which type of parallelism involves dividing a task into smaller, independent subtasks that can be performed concurrently?

9. Discuss the concept of parallel computing granularity and its impact on scalability and efficiency, providing examples from real-world applications.

10. Why is load balancing important in parallel computing?

11. What is the significance of parallel computing in real-time systems?

12. In parallel computing, what does load balancing refer to?

13. Discuss the challenges and advantages of achieving load balancing in complex parallel computing environments.

14. What is the purpose of parallel for-loops in parallel programming?
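
As a reference point for this question, a parallel for-loop can be sketched with a thread pool mapping a function over independent iterations (a minimal illustration; the `square` function and pool size are assumptions for the sketch):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Each loop iteration is independent of the others, so the pool is free
# to run them concurrently; map() still returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```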

15. Which parallel algorithm is commonly used for searching a key in a large dataset?

16. What is a race condition in parallel programming?
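
For reference alongside this question, the classic race condition is a lost update on a shared counter; the sketch below shows the standard lock-based fix (a minimal illustration, with the counter value and thread count chosen arbitrarily):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, the "read, add, write" sequence from two
        # threads can interleave and lose updates -- the race condition.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; possibly less without it
```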

17. What is parallel computing?

18. Define a parallel algorithm.

19. Discuss the challenges and solutions associated with mitigating race conditions in parallel programming.

20. Explain the concept of speculative execution in the context of high-performance parallel computing.

21. Discuss the concept of deadlock in parallel computing and provide strategies for preventing and resolving deadlock situations.
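
As a reference for this question, one standard deadlock-prevention strategy is a global lock-acquisition order, sketched below (a minimal illustration; the `transfer` helper and two-lock setup are assumptions for the sketch):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
log = []

def transfer(name):
    # Deadlock prevention by lock ordering: every thread acquires
    # lock_a before lock_b, so a circular wait can never form.
    with lock_a:
        with lock_b:
            log.append(name)

t1 = threading.Thread(target=transfer, args=("t1",))
t2 = threading.Thread(target=transfer, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(log))  # ['t1', 't2']
```

If one thread instead took lock_b first, the two threads could each hold one lock while waiting forever for the other, which is the deadlock this ordering rules out.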

22. Explain Amdahl's Law and its significance in parallel computing.
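
For reference alongside the several Amdahl's Law questions in this paper, the law itself is speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction and n the processor count. A small calculator sketch (the `amdahl_speedup` helper name and sample inputs are illustrative assumptions):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's Law: speedup = 1 / ((1 - p) + p / n)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# Even with many processors, a 10% serial portion caps speedup near 10x.
print(round(amdahl_speedup(0.9, 4), 2))     # 3.08
print(round(amdahl_speedup(0.9, 1000), 2))  # 9.91
```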

23. Describe the difference between task parallelism and data parallelism in parallel computing.

24. Evaluate the role of parallel computing schedulers in managing complex parallel processes and optimizing overall system performance.

25. What is a race condition in parallel computing?

26. Which parallel computing paradigm focuses on dividing a problem into smaller, identical tasks that can be solved independently?

27. What is parallel processing?

28. What is the purpose of a barrier synchronization in parallel programming?
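
As a reference for this question, a barrier forces all threads to finish one phase before any thread starts the next; the sketch below uses Python's `threading.Barrier` (the two-phase setup and thread count are illustrative assumptions):

```python
import threading

barrier = threading.Barrier(3)
order = []  # list.append is atomic in CPython, so sharing it here is safe

def phase_worker():
    order.append("phase 1")
    barrier.wait()  # blocks until all 3 threads have finished phase 1
    order.append("phase 2")

threads = [threading.Thread(target=phase_worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The barrier guarantees every phase-1 entry precedes every phase-2 entry.
print(order)  # ['phase 1', 'phase 1', 'phase 1', 'phase 2', 'phase 2', 'phase 2']
```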

29. Which programming language is commonly used for parallel computing?

30. What is the primary advantage of parallel computing in scientific simulations?

31. What is Amdahl's Law used for in parallel computing?

32. What is the significance of parallel for-loops in parallel programming, and how do they contribute to optimizing computational speed?

33. What is the primary challenge of achieving load balancing in parallel computing?

34. Which parallel computing model involves a single control unit managing multiple processors?

35. In parallel computing, what is speculative execution?

36. What are the main challenges in designing efficient parallel algorithms?

37. Discuss the concept of SIMD (Single Instruction, Multiple Data) and its applications in parallel computing.
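
For reference alongside this question, SIMD applies one instruction to every lane of a fixed-width vector register at once. Real SIMD lives in hardware instructions (e.g. vector adds), so the sketch below is only a conceptual model of the lane-wise behavior, with the `simd_add` name and 4-lane width as illustrative assumptions:

```python
def simd_add(lanes_a, lanes_b):
    """Conceptual SIMD: a single 'add' applied to all 4 lanes of two
    fixed-width vector registers simultaneously."""
    assert len(lanes_a) == len(lanes_b) == 4  # fixed register width
    return [a + b for a, b in zip(lanes_a, lanes_b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```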

38. Evaluate the impact of Amdahl's Law on the design and scalability of parallel algorithms, considering different scenarios and levels of parallelization.

39. Which parallel computing model is based on the idea of breaking a task into subtasks that can be processed independently?

40. In parallel computing, what does Amdahl's Law express?

41. Describe the challenges and advantages of achieving load balancing in complex parallel computing environments.

42. Examine the role of parallel computing in optimizing scientific simulations, highlighting specific scenarios where it provides significant advantages.

43. In the context of parallel algorithms, what is meant by granularity and how does it impact performance?

44. Which parallel programming concept involves dividing a program into small, independent threads of execution?

45. Explain the concept of granularity and its impact on performance, with a focus on achieving optimal performance in parallel algorithms.

46. In parallel computing, what does granularity refer to?

47. What is the purpose of parallel prefix sum in parallel algorithms?
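
As a reference for this question, the Hillis-Steele inclusive scan is one standard parallel prefix sum: log2(n) rounds, each of which could update all elements in parallel. The sketch below simulates those rounds sequentially (the `prefix_sum` helper name is an illustrative assumption):

```python
def prefix_sum(values):
    """Hillis-Steele inclusive scan, with the parallel rounds simulated
    sequentially: in round k, element i adds in element i - 2**k."""
    result = list(values)
    step = 1
    while step < len(result):
        # A real parallel scan updates every element simultaneously,
        # so each round must read from the previous round's snapshot.
        prev = list(result)
        for i in range(step, len(result)):
            result[i] = prev[i] + prev[i - step]
        step *= 2
    return result

print(prefix_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

Each output element is the sum of all inputs up to and including its position, which is what makes the scan a building block for stream compaction, sorting, and allocation in parallel algorithms.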

48. What is the role of a parallel computing cache?

49. What role does the parallel computing cache play in optimizing the performance of parallel algorithms?

50. In parallel computing, what is a deadlock?