Added here from my post on the forum.
I had a chat with Jason about this a week ago. Essentially:
After component pickup, the OpenPnP motion thread tells the (external) motion controller to go to the target location (the place position on the PCB).
The motion thread then blocks, waiting on the image-processing thread.
The image-processing thread is also blocked at this time.
The motion controller takes care of getting the part in the right place over the UP camera and takes the exposure.
The motion controller then sends a signal to the PC (say, a character on a serial port), and this unblocks the image thread.
Now unblocked, the image thread processes the image and comes up with new offset and rotation corrections for the placement.
The image thread posts this new info to the motion thread and blocks again.
The motion thread unblocks and sends the updated target position and rotation to the motion controller.
The motion thread then blocks waiting for the 'on station' signal from the motion controller, and the placement ensues.
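The handshake above can be sketched with two blocking rendezvous points: one for the controller's "exposure taken" signal, and one for the vision result handed from the image thread to the motion thread. This is only a minimal illustration; the class, method, and field names here are invented, not OpenPnP's actual API, and in a real setup a serial-port listener would be what calls `onSerialSignal()`.

```java
import java.util.concurrent.SynchronousQueue;

// Minimal sketch of the motion/vision handshake described above.
// All names are hypothetical, not OpenPnP's real classes.
public class PlaceHandshake {
    // Offset/rotation correction computed by the image thread (hypothetical type).
    static class Correction {
        final double dx, dy, dRot;
        Correction(double dx, double dy, double dRot) {
            this.dx = dx; this.dy = dy; this.dRot = dRot;
        }
    }

    // Rendezvous points: each put() blocks until the other side take()s.
    private final SynchronousQueue<String> exposureDone = new SynchronousQueue<>();     // controller -> image thread
    private final SynchronousQueue<Correction> corrections = new SynchronousQueue<>();  // image thread -> motion thread

    // Called when the controller's "exposure taken" byte arrives on the serial port.
    public void onSerialSignal() throws InterruptedException {
        exposureDone.put("exposed");
    }

    // Image thread: blocked until the exposure signal, then processes and posts.
    public void imageThreadLoop() throws InterruptedException {
        exposureDone.take();                                  // blocked here until controller signals
        Correction c = new Correction(0.05, -0.02, 0.3);      // stand-in for the real vision result
        corrections.put(c);                                   // post result; blocks until motion thread takes it
    }

    // Motion thread: blocked until the correction arrives, then updates the target.
    public Correction motionThreadAwaitCorrection() throws InterruptedException {
        return corrections.take();                            // blocked here until image thread posts
    }

    // Drives one pass of the handshake on demo threads (for illustration only).
    public static Correction runOnce() {
        PlaceHandshake h = new PlaceHandshake();
        new Thread(() -> {
            try { h.imageThreadLoop(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
        new Thread(() -> {
            try { h.onSerialSignal(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
        try { return h.motionThreadAwaitCorrection(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }
}
```

`SynchronousQueue` is used because it has exactly the semantics described: both sides block until they meet, so every step stays in lock-step with the controller.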
Of course, various things might happen differently if we are placing a big component, where we most likely need to STOP over the camera and have a few goes at rotation (putting the part down, picking it up again, rotating, etc.) to get it right. But that's not often needed (only for hard parts).
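That stop-and-retry loop for big parts might look something like the sketch below. `Vision`, `Machine`, the tolerance, and the retry count are all hypothetical stand-ins, assuming the UP camera can report a rotation error on each pass.

```java
// Hypothetical sketch of multi-attempt alignment for large parts:
// stop over the camera and iterate rotate / re-inspect until within tolerance.
public class BigPartAlign {
    interface Vision  { double measureRotationError(); }        // degrees off, from the UP camera
    interface Machine { void rotateNozzleBy(double degrees); }  // apply a rotation correction

    // Returns the residual rotation error after at most maxTries correction passes.
    static double align(Vision vision, Machine machine, double toleranceDeg, int maxTries) {
        double err = vision.measureRotationError();
        for (int i = 0; i < maxTries && Math.abs(err) > toleranceDeg; i++) {
            machine.rotateNozzleBy(-err);          // correct, then look again
            err = vision.measureRotationError();
        }
        return err;
    }
}
```

The put-down/pick-up-again dance for parts that can't be rotated on the nozzle would slot in where `rotateNozzleBy` is called, but the loop shape is the same.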
And if someone has a camera that requires everything to be stationary, slightly different but similar events occur.
However, my idea is basically to decouple the motion and image processing so that the machine can be on its way while the images are being worked on.
Of course, if the image was bad, or the image processor's confidence was low, it is going to have to tell the motion thread to send the head back to where the camera is (or send the camera to it, or something).
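A rough sketch of that decoupled flow, using a future for the in-flight vision result: the head departs as soon as the exposure is taken, and the decision to place or go back is made when the result lands. The names, the confidence score, and the threshold are all assumptions for illustration.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the decoupled idea: motion departs immediately, vision runs in
// parallel, and a low-confidence result sends the head back for another look.
// All names and the confidence model are made up for illustration.
public class AsyncVision {
    static class VisionResult {
        final double dx, dy, confidence;    // offset correction plus a 0..1 confidence score
        VisionResult(double dx, double dy, double confidence) {
            this.dx = dx; this.dy = dy; this.confidence = confidence;
        }
    }

    // Motion side: the head is already travelling toward the target while
    // 'pending' is being computed; decide what to do once the result is in.
    static String placeOrRetry(CompletableFuture<VisionResult> pending, double minConfidence) {
        VisionResult r = pending.join();    // waits only if vision is slower than the move
        if (r.confidence < minConfidence) {
            return "RETURN_TO_CAMERA";      // bad/uncertain image: go back for another look
        }
        return "PLACE";                     // apply r.dx / r.dy at the target and place
    }
}
```

The win is that in the common case (good image, confident result) the vision time is completely hidden inside the travel time; only the rare low-confidence case costs an extra trip back to the camera.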
Of course, this need to take another look at the component, moving the head back to the camera and so on, is exactly what flipping a mirror into the optical path and looking at the component that way would avoid, which is a great advantage of that approach. I just have not yet found a good way of doing that for multiple nozzles. More thought required.