High Performance Computing Set 5
Answer the questions below to test your knowledge and improve your High Performance Computing skills.
1. In all-to-all broadcast on a mesh, in which sequence is the operation performed?
rowwise, columnwise
columnwise, rowwise
columnwise, columnwise
rowwise, rowwise
2. Messages get smaller in ______ and stay constant in ______.
gather, broadcast
scatter, broadcast
scatter, gather
broadcast, gather
3. The time taken by all-to-all broadcast on a ring is ______.
T = (ts + tw·m)(p − 1)
T = ts·log p + tw·m(p − 1)
T = 2ts(√p − 1) − tw·m(p − 1)
T = 2ts(√p − 1) + tw·m(p − 1)
4. The time taken by all-to-all broadcast on a mesh is ______.
T = (ts + tw·m)(p − 1)
T = ts·log p + tw·m(p − 1)
T = 2ts(√p − 1) − tw·m(p − 1)
T = 2ts(√p − 1) + tw·m(p − 1)
5. The time taken by all-to-all broadcast on a hypercube is ______.
T = (ts + tw·m)(p − 1)
T = ts·log p + tw·m(p − 1)
T = 2ts(√p − 1) − tw·m(p − 1)
T = 2ts(√p − 1) + tw·m(p − 1)
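As a quick sanity check on the expressions in questions 3-5, here is a minimal Python sketch; the values of ts, tw, m, and p are illustrative assumptions, not part of the quiz:

    import math

    # Assumed communication parameters: startup time, per-word transfer time,
    # message size in words, and number of processes.
    ts, tw, m, p = 10.0, 0.5, 4, 16

    t_ring = (ts + tw * m) * (p - 1)                          # ring
    t_mesh = 2 * ts * (math.sqrt(p) - 1) + tw * m * (p - 1)   # 2-D wraparound mesh
    t_cube = ts * math.log2(p) + tw * m * (p - 1)             # hypercube

    print(t_ring, t_mesh, t_cube)   # 180.0, 90.0, 70.0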
6. The prefix-sum operation can be implemented using the ______ kernel.
all-to-all broadcast
one-to-all broadcast
all-to-one broadcast
all-to-all reduction
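A minimal sketch of the prefix-sum kernel from question 6, simulated sequentially in Python on a hypercube of 8 nodes (the node count and input values are assumptions for illustration):

    import math

    def hypercube_prefix_sum(vals):
        # Each node keeps a running "message" sum it forwards to partners and a
        # "result" that only folds in contributions from lower-numbered nodes.
        p = len(vals)
        msg, res = list(vals), list(vals)
        for step in range(int(math.log2(p))):
            incoming = [msg[i ^ (1 << step)] for i in range(p)]  # pairwise exchange
            for i in range(p):
                if (i ^ (1 << step)) < i:   # partner precedes node i
                    res[i] += incoming[i]
                msg[i] += incoming[i]
        return res

    print(hypercube_prefix_sum([3, 1, 4, 1, 5, 9, 2, 6]))
    # [3, 4, 8, 9, 14, 23, 25, 31]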
7. Select the parameters on which the parallel runtime of a program depends.
number of processors
communication parameters of the machine
all of the above
input size
8. The time that elapses from the moment the first processor starts to the moment the last processor finishes execution is called ______.
parallel runtime
overhead runtime
excess runtime
serial runtime
9. Select how the overhead function (To) is calculated.
To = p·n·Tp − Ts
To = p·Tp − Ts
To = Tp − p·Ts
To = Tp − Ts
10. What is the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements?
overall time
speedup
scaleup
efficiency
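A worked example for questions 9 and 10 (the numbers are assumed for illustration): if a problem takes Ts = 100 s on one processor and Tp = 30 s on p = 4 processors, then the speedup is S = Ts/Tp = 100/30 ≈ 3.33 and the overhead function is To = p·Tp − Ts = 4·30 − 100 = 20 s.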
11. Which of the following is an alternative option for latency hiding?
increase CPU frequency
multithreading
increase bandwidth
increase memory
12. The ______ communication model is generally seen in tightly coupled systems.
message passing
shared-address space
client-server
distributed network
13. The principal parameters that determine the communication latency are as follows:
startup time (ts), per-hop time (th), per-word transfer time (tw)
startup time (ts), per-word transfer time (tw)
startup time (ts), per-hop time (th)
startup time (ts), message packet size (w)
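Under this model, sending an m-word message over l links costs roughly ts + l·th + m·tw. With assumed values ts = 50 µs, th = 1 µs, and tw = 0.5 µs, a 100-word message traversing 4 hops takes 50 + 4·1 + 100·0.5 = 104 µs.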
14. The number and size of tasks into which a problem is decomposed determines the ______.
granularity
task
dependency graph
decomposition
15. The average degree of concurrency is ______.
the average number of tasks that can run concurrently over the entire duration of execution of the process.
the average time that can run concurrently over the entire duration of execution of the process.
the average in-degree of the task-dependency graph.
the average out-degree of the task-dependency graph.
16. Which task decomposition technique is suitable for the 15-puzzle problem?
data decomposition
exploratory decomposition
speculative decomposition
recursive decomposition
17. Which of the following methods is used to avoid interaction overheads?
maximizing data locality
minimizing data locality
increase memory size
None of the above
18. Which of the following is not a parallel algorithm model?
the data parallel model
the work pool model
the task graph model
the speculative model
19. NVIDIA GPUs are based on which of the following architectures?
MIMD
SIMD
SISD
MISD
20. What is Critical Path?
the length of the longest path in a task dependency graph is called the critical path length.
the length of the smallest path in a task dependency graph is called the critical path length.
path with loop
None of the mentioned.
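A minimal sketch connecting questions 15 and 20: for a small task-dependency graph with unit-weight tasks (the graph below is an assumed example), the critical path length is the longest chain of dependent tasks, and the average degree of concurrency is total work divided by that length:

    # Task-dependency graph: each task maps to the tasks it depends on.
    deps = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"], "e": ["c", "d"]}

    def critical_path_length(deps):
        memo = {}
        def longest(t):   # longest dependency chain ending at task t
            if t not in memo:
                memo[t] = 1 + max((longest(u) for u in deps[t]), default=0)
            return memo[t]
        return max(longest(t) for t in deps)

    cp = critical_path_length(deps)
    print(cp)              # 3, e.g. a -> c -> e
    print(len(deps) / cp)  # ~1.67, the average degree of concurrency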
21. Which decomposition technique uses the divide-and-conquer strategy?
recursive decomposition
data decomposition
exploratory decomposition
speculative decomposition
22. The critical path in a task-dependency graph is ______.
the longest path between any pair of finish nodes.
the longest directed path between any pair of start & finish node.
the shortest path between any pair of finish nodes.
the number of maximum nodes level in graph.
23. Scatter is ______.
one to all broadcast communication
all to all broadcast communication
one-to-all personalized communication
None of the above
24. In a 4×4 mesh topology, ______ message-passing cycles are required to complete all-to-all reduction.
4
6
8
10
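For question 24: all-to-all reduction on a √p × √p mesh, like all-to-all broadcast, runs in a rowwise phase followed by a columnwise phase, for 2(√p − 1) message-passing cycles in total; with p = 16 that is 2(4 − 1) = 6.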
25. Which of the following issue(s) is/are true about sorting techniques in parallel computing?
large sequence is the issue
where to store output sequence is the issue
small sequence is the issue
None of the above
26. Partitioning of the series is done after ______.
local arrangement
process assignment
global arrangement
None of the above
27. In parallel DFS, processes have the following roles. (Select multiple choices if applicable.)
donor
active
idle
passive
28. Suppose there are 16 elements in a series. How many phases will be required to sort the series using parallel odd-even bubble sort?
8
4
5
15
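A minimal sequential sketch of the odd-even (transposition) sort behind question 28; within each phase the compare-exchanges touch disjoint pairs and could therefore run in parallel (the input values are assumed for illustration):

    def odd_even_sort(a):
        a = list(a)
        n = len(a)
        for phase in range(n):                  # n phases suffice for n elements
            start = 0 if phase % 2 == 0 else 1  # even phase: (0,1),(2,3)...; odd: (1,2),(3,4)...
            for i in range(start, n - 1, 2):    # independent compare-exchanges
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a

    print(odd_even_sort([9, 7, 16, 3, 12, 1, 5, 2]))  # [1, 2, 3, 5, 7, 9, 12, 16]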
29. Which are the different sources of overhead in parallel programs?
interprocess interactions
process idling
all mentioned options
excess computation
30. Speedup is defined as ______.
the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements
the ratio of the time taken to solve a problem on a parallel computer with p identical processing elements to the time required to solve the same problem on a single processor
the ratio of the number of processors to the size of the data
None of the above
31. CUDA helps execute code in parallel using the ______.
CPU
GPU
ROM
cache memory
32. In the thread-function execution scenario, a thread is a ______.
work
worker
task
None of the above
33. Which of the following statements about GPUs are true?
a grid contains blocks
a block contains threads
all the mentioned options
SM stands for streaming multiprocessor
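A plain-Python sketch of the hierarchy in question 33: the grid contains blocks, each block contains threads, and every thread derives a unique global index the way a CUDA kernel conventionally does (the grid and block sizes are assumptions for illustration):

    grid_dim, block_dim = 4, 8   # 4 blocks per grid, 8 threads per block

    # blockIdx * blockDim + threadIdx, computed for every thread in the grid:
    ids = [b * block_dim + t for b in range(grid_dim) for t in range(block_dim)]
    assert ids == list(range(grid_dim * block_dim))  # 32 distinct indices
    # Each index would select the data element that one GPU thread processes.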
34. The computer system of a parallel computer is capable of ______.
decentralized computing
parallel computing
centralized computing
All of these
35. In which application area do distributed systems run well?
HPC
distributed framework
HRC
None of the above
36. A pipeline is like ______.
an automobile assembly line
house pipeline
both a and b
a gas line
37. A pipeline implements ______.
fetch instruction
decode instruction
fetch operand
All of the above
38. A processor performing fetch or decode of a different instruction during the execution of another instruction is called ______.
super-scaling
pipe-lining
parallel computation
None of these
39. VLIW stands for ______.
very long instruction word
very long instruction width
very large instruction word
very long instruction width
40. Which one is not a limitation of a distributed memory parallel system?
higher communication time
cache coherency
synchronization overheads
None of the above
41. Which of these steps can create conflict among the processors?
synchronized computation of local variables
concurrent write
concurrent read
None of the above
42. Which one is not a characteristic of NUMA multiprocessors?
it allows shared memory computing
memory units are placed in physically different locations
all memory units are mapped to one common virtual global memory
processors access their independent local memories
43. Which of these is not a source of overhead in parallel computing?
non-uniform load distribution
less local memory requirement in distributed computing
synchronization among threads in shared memory computing
None of the above
44. Systems that do not have parallel processing capabilities are ______.
SISD
SIMD
MIMD
All of the above
45. Parallel processing may occur ______.
in the instruction stream
in the data stream
both a and b
None of the above
46. To which class of systems does the von Neumann computer belong?
SIMD (single instruction, multiple data)
MIMD (multiple instruction, multiple data)
MISD (multiple instruction, single data)
SISD (single instruction, single data)
47. Fine-grain threading is considered ______ threading.
instruction-level
loop-level
task-level
function-level
48. A multiprocessor is a system with multiple CPUs capable of independently executing different tasks in parallel. In which category does every processor and memory module have similar access time?
UMA
microprocessor
multiprocessor
NUMA
49. The misses that arise from interprocessor communication are called ______.
hit rate
coherence misses
commit misses
parallel processing
50. The NUMA architecture uses ______ in its design.
cache
shared memory
message passing
distributed memory
51. A multiprocessor machine that is capable of executing multiple instructions on multiple data sets is ______.
SISD
SIMD
MIMD
MISD
52. In message passing, messages are sent and received between ______.
tasks or processes
task and execution
processor and instruction
instruction and decode
53. The first step in developing a parallel algorithm is ______.
to decompose the problem into tasks that can be executed concurrently
execute directly
execute indirectly
None of the above
54. The number of tasks into which a problem is decomposed determines its ______.
granularity
priority
modernity
None of the above
55. The length of the longest path in a task dependency graph is called ______.
the critical path length
the critical data length
the critical bit length
None of the above
56. The graph of tasks (nodes) and their interactions/data exchange (edges)?
is referred to as a task interaction graph
is referred to as a task communication graph
is referred to as a task interface graph
None of the above
57. Mappings are determined by ______.
task dependency
task interaction graphs
both a and b
None of the above
58. Decomposition techniques include ______.
recursive decomposition
data decomposition
exploratory decomposition
All of the above
59. The Owner Computes Rule generally states that the process assigned a particular data item is responsible for?
all computation associated with it
only one computation
only two computations
only occasional computation
60. A simple application of exploratory decomposition is ______.
the solution to the 15-puzzle
the solution to a 20-puzzle
the solution to any puzzle
None of the above
61. Speculative decomposition consists of ______.
conservative approaches
optimistic approaches
both a and b
only b
62. Task characteristics include ______.
task generation
task sizes
size of data associated with tasks
All of the above
63. Writing parallel programs is referred to as ______.
parallel computation
parallel processes
parallel development
parallel programming
64. Which of the following is a parallel algorithm model?
data parallel model
bit model
data model
network model
65. The number and size of tasks into which a problem is decomposed determines the ______.
fine-granularity
coarse-granularity
sub task
granularity
66. A feature of a task-dependency graph that determines the average degree of concurrency for a given granularity is its ______ path.
critical
easy
difficult
ambiguous
67. The pattern of ______ among tasks is captured by what is known as a task-interaction graph.
interaction
communication
optimization
flow
68. Interaction overheads can be minimized by ______.
maximize data locality
maximize volume of data exchange
increase bandwidth
minimize social media contents
69. The type of parallelism that is naturally expressed by independent tasks in a task-dependency graph is called ______ parallelism.
task
instruction
data
program
70. Speedup is defined as the ratio ______.
S = Ts/Tp
S = Tp/Ts
Ts = S/Tp
Tp = S/Ts
71. Parallel computing means dividing the job into several ______.
bits
data
instructions
tasks
72. _____ is a method for inducing concurrency in problems that can be solved using the divide-and-conquer strategy?
exploratory decomposition
speculative decomposition
data decomposition
recursive decomposition
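A minimal sketch of recursive decomposition for question 72: merge sort's divide-and-conquer structure, where the two recursive calls at each level share no data and are therefore independent tasks a parallel runtime could schedule concurrently (this sequential version only marks where that concurrency lies):

    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        # Independent subtasks: a parallel runtime could run these two
        # recursive calls on different processing elements.
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge the sorted halves
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]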
73. The ______ time collectively spent by all the processing elements is Tall = p·Tp.
total
average
mean
sum
74. The dual of one-to-all broadcast is ______.
all-to-one reduction
all-to-one receiver
all-to-one sum
None of the above
75. A hypercube has ______.
2^d nodes
2d nodes
2n nodes
n nodes
76. The prefix-sum operation can be implemented using the ______.
all-to-all broadcast kernel
all-to-one broadcast kernel
one-to-all broadcast kernel
scatter kernel
77. In the scatter operation, ______.
a single node sends a unique message of size m to every other node
a single node sends the same message of size m to every other node
a single node sends a unique message of size m to the next node
None of the above
78. The gather operation is exactly the inverse of the ______.
scatter operation
broadcast operation
prefix sum
reduction operation
79. Parallel algorithms often require a single process to send identical data to all other processes or to a subset of them. This operation is known as ______.
one-to-all broadcast
all-to-one broadcast
one-to-all reduction
all-to-one reduction
80. In which of the following operations does a single node send a unique message of size m to every other node?
gather
scatter
one-to-all personalized communication
both a and c
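The collective operations in questions 74-80 map directly onto MPI. A minimal mpi4py sketch (run with, e.g., mpiexec -n 4 python collectives.py; the script name and data values are assumptions for illustration):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, p = comm.Get_rank(), comm.Get_size()

    # One-to-all broadcast: root sends identical data to every process.
    x = comm.bcast("hello" if rank == 0 else None, root=0)

    # Scatter (one-to-all personalized): root sends a unique piece to each process.
    piece = comm.scatter([i * i for i in range(p)] if rank == 0 else None, root=0)

    # Gather: exactly the inverse of scatter; root collects one piece per process.
    pieces = comm.gather(piece, root=0)

    # All-to-one reduction: the dual of one-to-all broadcast.
    total = comm.reduce(rank, op=MPI.SUM, root=0)

    if rank == 0:
        print(x, pieces, total)   # hello [0, 1, 4, 9] 6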