Multithreading – many threads or as few threads as possible?
As a side project, I'm writing a server for an old game I used to play. I try to keep my designs as loosely coupled as possible, but I'd like to know whether multithreading here is a good design decision. At present, I have the following operation sequence:
> Start (create) -> Server (listen for clients, create) -> Client (listen for commands, send periodic data)
I'm assuming an average of 100 clients, since that's the game's maximum at any given time. What is the right decision for threading the whole thing? My current setup is as follows:
> 1 thread on the server listening for new connections; on a new connection, it creates a client object and goes back to listening.
> Each client object has a thread that listens for incoming commands and sends periodic data. This is done with a non-blocking socket, so it just checks whether data is available, processes it, and then sends any queued messages. Login is completed before the send/receive cycle starts.
> 1 thread (for now) for the game itself, since I consider it architecturally separate from the whole client/server part.
This results in 102 threads in total. I'm even considering giving each client two threads, one for sending and one for receiving. If I do that, I can use blocking I/O on the receiver thread, which means that thread would be mostly idle on average.
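The design above (one accepting thread, one blocking receiver thread per client) can be sketched as follows. This is a minimal illustration, not the poster's actual code; the class name, port, and the echo "processing" are all placeholders.

```java
// Sketch of the thread-per-client model described above (names are
// illustrative). One listener thread accepts connections; each client
// then gets its own thread doing blocking reads.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class GameServer {
    final ServerSocket listener;

    GameServer(int port) throws IOException {
        listener = new ServerSocket(port);
        // The listener thread: accept a connection, hand it to a fresh
        // client thread, and immediately go back to accepting.
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = listener.accept();
                    new Thread(() -> handleClient(client)).start();
                }
            } catch (IOException e) {
                // listener socket closed - stop accepting
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();
    }

    // Blocking I/O per client: the thread sits idle in readLine() until a
    // command arrives, which costs no CPU while blocked. Each command is
    // acknowledged back as a stand-in for real game processing.
    static void handleClient(Socket socket) {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String command;
            while ((command = in.readLine()) != null) {
                out.println("ack " + command);  // placeholder for game logic
            }
        } catch (IOException e) {
            // client disconnected - drop this session
        }
    }
}
```

The key property is that a blocked thread consumes no CPU, only a stack; with ~100 mostly idle clients, the cost is memory and scheduler bookkeeping rather than CPU time.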
My main concern is that by using this many threads, I'll be hogging resources. I'm not worried about race conditions or deadlocks, since those are things I'll have to deal with anyway.
My design is such that I could use one thread for all client communication, whether there's 1 client or 100. I've separated the communication logic from the client object itself, so I could change it without rewriting much code.
The main question is: is it wrong to use 200+ threads in an application? Does it have any advantages? I'm thinking of running this on a multi-core machine; do many threads like this actually take advantage of multiple cores?
Thank you!
Of all these threads, most are usually blocked. I don't expect more than five new connections per minute, and client commands are rare, averaging 20 per minute.
Based on the answer here (context switching was the performance cost I was thinking of, but didn't know the name of until you pointed it out, thank you!), I think I'll go for one listener, one receiver, one sender, and some miscellaneous stuff ;-)
Solution
I write in .NET, and I'm not sure whether the way I code is due to .NET's restrictions and API design or whether it's a standard way of doing things, but this is how I've done this kind of thing in the past:
> A queue object used to queue the incoming data. It should be locked between the queueing thread and the worker thread to avoid race conditions.
> A worker thread that processes the data in the queue. The thread that queues the data uses a semaphore to notify this thread that there are items to process. This thread starts before any of the others and contains a continuous loop that runs until it receives a shutdown request. The first instruction in the loop checks a flag for pause / resume / terminate. The flag is initially set to pause so that the thread sits idle (rather than looping continuously) while there is nothing to process. The queueing thread changes the flag when there are items in the queue to process. The worker then processes a single item from the queue on each iteration of the loop. When the queue is empty, it sets the flag back to pause so that on the next iteration it waits until the queueing process notifies it that there is more work to do.
> A connection listener thread that listens for incoming connection requests and passes them to...
> ...a connection processing thread that creates the connection/session. Keeping this separate from the connection listener thread means you reduce the potential for connection requests being lost because the listener was tied up while a request was being processed.
> An incoming data listener thread that listens for incoming data on the current connection. All data is passed to the queueing thread for processing. Your listener threads should do as little as possible beyond basic listening, and hand the data off for processing.
> A queueing thread that puts the data in the queue in the correct order so everything can be processed correctly, and that raises the semaphore to the processing queue to indicate there is data to process. Keeping this thread separate from the incoming data listener means you are less likely to miss incoming data.
> Some session object passed between methods so that each user's session is self-contained throughout the thread model.
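The queue-plus-worker part of the list above can be sketched as follows. The original is .NET, so this Java version is an approximation: the class and field names are illustrative, the `StringBuilder` stands in for real processing, and the semaphore count doubles as the pause/resume flag (the worker blocks in `acquire()` when there is nothing to do, which is the "idle until notified" behavior the answer describes).

```java
// Queue + worker pattern: the queueing thread locks the shared queue and
// raises a semaphore; the worker blocks on the semaphore when idle and
// processes one item per loop iteration until asked to shut down.
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicBoolean;

public class QueuedProcessor {
    private final Queue<String> queue = new ArrayDeque<>();
    private final Semaphore items = new Semaphore(0);
    private final AtomicBoolean shutdown = new AtomicBoolean(false);
    private final Thread worker;
    public final StringBuilder processed = new StringBuilder(); // stand-in for real work

    public QueuedProcessor() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    items.acquire();              // idle ("paused") until notified
                    if (shutdown.get()) return;   // terminate request
                    String item;
                    synchronized (queue) {        // lock shared with the queueing thread
                        item = queue.poll();
                    }
                    processed.append(item).append(';'); // one item per iteration
                }
            } catch (InterruptedException e) {
                // interrupted - treat as a shutdown request
            }
        });
        worker.start();                           // worker starts before producers
    }

    // Called by the queueing thread: add an item, then raise the semaphore.
    public void enqueue(String data) {
        synchronized (queue) {
            queue.add(data);
        }
        items.release();  // notify the worker there is an item to process
    }

    public void stop() throws InterruptedException {
        shutdown.set(true);
        items.release();  // wake the worker so it can see the flag
        worker.join();
    }
}
```

In real use, the items would be network payloads handed over by the incoming data listener, and `processed` would be replaced by the actual game/session logic.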
This keeps the threads as simple but robust as I've been able to make them. I'd love to find a simpler model than this, but I've found that if I try to cut down the threading model, I start losing data over the network stream or missing connection requests.
It also assists TDD (test-driven development), in that each thread handles a single task, which makes the code easier to test. Having hundreds of threads can quickly become a resource-allocation nightmare, while a single thread becomes a maintenance nightmare.
It's far simpler to keep one thread per logical task, in the same way that you'd have one method per task in a TDD environment: you can logically separate each thing that should be done, so it's easier to spot potential problems and much easier to fix them.