Submitted by Sebastian on Sun, 11/02/2014 - 23:40
We've just finished the schematic and layout for the AXIOM Beta 4K Super35 (CMV12000) image sensor board. Yay!
The design features a few new ideas gathered from input and feedback on the existing design during the crowdfunding campaign. They are as follows:
The sensor board contains an EEPROM to store sensor model, voltages and unique identifiers.
The sensor board can be rotated by 180 degrees and still work with the same interface board (auto-discovery detects the orientation).
The sensor board specifies and verifies what sensor voltages are desired and required (up to six voltages can be requested).
There are no parts on the top of the PCB except for the sensor/sensor socket to facilitate any kind of cooling system.
There is a cut-out area in the middle to connect to a heat sink or to measure sensor temperature with a thermopile.
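As a rough illustration of what such an EEPROM record might contain, here is a minimal Python sketch; the field layout, sizes, and rail values are assumptions made up for illustration, not the actual AXIOM format:

```python
import struct

# Hypothetical EEPROM record layout (NOT the real AXIOM format):
# 16-byte sensor model string, 8-byte unique ID, then six voltage
# rails as (min_mV, nominal_mV, max_mV) unsigned 16-bit triples.
RECORD = struct.Struct("<16s8s" + "HHH" * 6)

def pack_record(model, uid, rails):
    """Serialize the sensor description; pad the rail list to six entries."""
    rails = list(rails) + [(0, 0, 0)] * (6 - len(rails))
    flat = [v for rail in rails for v in rail]
    return RECORD.pack(model.encode().ljust(16, b"\0"), uid, *flat)

def unpack_record(blob):
    """Parse the record back; unused (all-zero) rail slots are dropped."""
    fields = RECORD.unpack(blob)
    model = fields[0].rstrip(b"\0").decode()
    uid = fields[1]
    flat = fields[2:]
    rails = [tuple(flat[i:i + 3]) for i in range(0, 18, 3) if flat[i + 2]]
    return model, uid, rails

blob = pack_record("CMV12000", b"\x01\x02\x03\x04\x05\x06\x07\x08",
                   [(3200, 3300, 3400), (1700, 1800, 1900)])
model, uid, rails = unpack_record(blob)
```

With a record like this, the interface board can read the model and the requested rail windows before applying any power, regardless of which sensor board is plugged in.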
Besides that, the designed interface should be generic enough to accommodate a number of other sensors without any changes to the interface board connections. This is an important step in AXIOM Beta design towards the modularity we are aiming for with the AXIOM Gamma. If you are experienced with schematics and/or board layout, please take the time to review the design and notify us about any issues you find. Note that this is a preliminary design and needs to be verified before we can actually build it.
Schematic:
axiom_beta_sensor_cmv12000_v0.9.pdf
Layer images:
axiom_beta_sensor_cmv12000_v0.9_all.png
axiom_beta_sensor_cmv12000_v0.9_top.png
axiom_beta_sensor_cmv12000_v0.9_sig1.png
axiom_beta_sensor_cmv12000_v0.9_sig2.png
axiom_beta_sensor_cmv12000_v0.9_bottom.png
Eagle files:
axiom_beta_sensor_cmv12000_v0.9.sch
axiom_beta_sensor_cmv12000_v0.9.brd
Our perk fulfilment team has been extra busy contacting all backers about their preferred button design, collecting t-shirt sizes, and notifying backers who have not yet added the shipping cost to their pledges. The buttons are already in production and should arrive in our office soon.
We have started creating an online simulation of the remote controller menu structure so we can evaluate how these devices can best be operated intuitively. The simulator is still an early draft, but the basic foundation is now done and future extensions can be added easily.
If you want to help us, feel free to grab the source code and make changes (send us a pull request when done): https://github.com/apertus-open-source-cinema/dictator/tree/master/HTML-Simulator
In 2011, when we first announced the project to build a remote control device for the Elphel camera, we called it "Dictator". The name referred to the "benevolent dictator", like Linus Torvalds, who is typically the initial leader of an open source community project, but we acknowledged that most people associated the term with much less popular historical figures. We are still looking for a name; make suggestions here: http://lab.apertus.org/V3
13 Comments
Thanks for your open update! Great first steps!
3 questions:
1. Hole: It looks like it doesn't fit the sensor exactly. Is that on purpose? Shouldn't the hole be bigger than the sensor, to avoid temperature differences on the sensor and to let airflow reach all parts in case there is no dedicated cooling?
2. Grounding planes/shielding: It looks like the top and bottom planes are carrying signals too. Wouldn't it be better to have those carry only GND and VCC as much as possible and bury the signal traces in the middle layers?
3. The grounding planes on the inner layers have some strange shapes; what are the reasons for those? Maybe part of the chip maker's recommended layout?
Keep the great work and open communication going!
1) Yes, on purpose. With a hole bigger than the sensor, no electrical connection would be possible :) Note that a metal plate can be placed right below the sensor which can cover the entire top of the sensor board.
2) There are pros and cons to burying the signals in the inner layers, and there are differences in layer thickness as well; we will probably do both designs (swapping inner and outer layers is easy) and simply compare their performance.
3) The inner layers are mixed ground and supply layers, which is due to us restricting ourselves to only four layers for now. This would unfold and straighten with six layers.
Since the sensor can accept multiple voltages, it would be great if it could work along the principles of the Canon C500 (all Canon cinema cameras?), where the voltage to the sensor is varied according to the desired ISO. From what I understand, this means the middle grey point always remains in the middle of the latitude scale, so you always have the same number of stops over and under. This is opposed to the Arri and Red system, where the mid point slides up and down with the ISO change, giving you less latitude one way or the other.
That is why we designed the voltage/power interface to be flexible: it only checks for over/under-voltage conditions within a given range and allows the Beta board to fine-tune those voltages for various reasons (power consumption, quality, etc.).
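For illustration, the kind of over/under-voltage range check described here could look like the following Python sketch; the rail names and millivolt windows are made-up example values, not the board's actual specification:

```python
# Hypothetical rail windows (example values, not the real spec):
# rail name -> (min_mV, max_mV) allowed by the sensor board.
RAILS = {
    "VDD33": (3135, 3465),
    "VDD18": (1710, 1890),
}

def check_rails(measured_mV):
    """Return a list of (rail, value, 'under'|'over') violations."""
    faults = []
    for rail, value in measured_mV.items():
        lo, hi = RAILS[rail]
        if value < lo:
            faults.append((rail, value, "under"))
        elif value > hi:
            faults.append((rail, value, "over"))
    return faults
```

A reading inside its window passes silently; anything outside is reported with its direction, which is all the interface needs in order to refuse powering an out-of-spec sensor board while still letting the Beta board tune voltages within the window.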
Thank you for sharing the progress status of the project.
I would like to help with building the user interface design of the remote.
What I would say, given these facts, is that development needs to keep up its pace, and summaries like the one you just wrote help avoid drifting away from the project's targets.
Good luck with what's coming up next, folks!
Help would be greatly appreciated! Please grab the JavaScript/jQuery code from the mentioned GitHub link and let me know once you have made progress (IRC, email, GitHub, etc.).
Ok great, I will try to send you a few "maquettes" (mock-ups) by email; hopefully JavaScript is my cup of tea :)
I will try to make beautiful, uncomplicated interfaces, something intuitive like in Apple's marketing bullshit :)
Hope to hear from you guys soon.
PS: the thing that can be a real motivator is that we are helping to create our future camera, how awesome! This is going to be legen - wait for it - dary :)
Great to see the progress so far.
I know development of the remote is still just starting. If it's not planned already, I think it would be great to be able to operate it with one hand in addition to its current setup. For example, using either the up/down and left/right buttons to cycle through all menus and the OK button to change a setting in each column, or using the ISO/shutter/WB buttons to cycle through the settings in their respective columns.
Keep up the good work!
Regarding the remote control: the name, of course, should be a recursive acronym, but I have a shortage of those at the moment. Perhaps a literate person can deliver one.
Could we see some user time and space optimization related to UI?
We see here some buttons performing a single function. This is a crude
and inefficient start to the design. This is not a crane. Time is money.
I have used several cameras with UIs ranging from horrible to very nice.
The best use a J-mouse or joystick-type control button, or a dial wheel which can also be pressed for a selection. Nice to use.
These are very compact and efficient, and allow rapid scrolling in menus.
The worst use the five button array you see placed on the right side of your
simulator drawing. I have no idea what menus you plan to put in the controller,
but if the UI is slow, you are wasting the user's time and causing extra motion. BAD.
Think about the control density and speed of the UI and switches.
The Panasonic HVX200a uses the five-button design seen here to navigate menus, and it's a horrible, slow pain in the ass. Terrible design. The UI sucks. I have an obsolete Canon ZR50 consumer digicam which has one of the best and most efficient UIs, using a scroll-and-select wheel type switch. Best I've seen in terms of space, maintenance, maybe cost as well. The J-mouse type button on the AF100 is pretty efficient as well. Seen on several cameras.
My simplest observation is that the control density is low, and human UI will be slow and repetitious.
Could you please consider getting input from the users about the best and worst UIs currently in use and simply pirate the best designs? I'm sure this is one of the easier tasks for the Beta project, and solutions are available all around us. Look in your junk box.
My 2p.
Could it make sense for the remote to be an ergonomic, plug-in, removable handgrip?
It could operate as the principal control interface whilst attached, but unplug/unclip for use when remote camera control is required.
Power?
In my opinion the remote control should fit the rear of the camera.
Thanks for the update. I've been meaning to get into contact via IRC but I simply don't have time to get my head round it all. I honestly think a forum of some sort would be much easier to deal with as you can have a series of threads dedicated to each part, rather than a series of overlapping conversations which can be quite hard to follow.
Anyway, my biggest concern/wish is to see some sort of high-speed frame buffer included in the sensor design. The biggest selling point of the CMV12000 chip for myself (and I suspect many who have opted for the Beta) is the speed of the chip, and it's definitely something we would want to be able to make use of for shooting high-speed video. Obviously there are likely to be some restrictions on constant playout at high speed (it looks like the Convergent Design Odyssey 7Q, for example, cannot handle data rates much beyond the Sony FS700's 4K raw at 60fps or 2K raw at 240fps); I doubt the Beta will be able to do much beyond 30fps over HDMI or SDI.
However, many high-speed cameras (the Phantom, for example, but also the Sony FS700) don't produce constant playout at high speed. Instead, raw frames are recorded into a high-speed buffer; then, once the buffer is full of frames (shot at high speed), these are played out from the buffer at normal speed for recording. The amount of 'real time' footage that can be recorded, then, depends on the shooting speed and the size of the frame buffer (e.g. shooting at 1000fps fills the frame buffer twice as fast as shooting at 500fps).
It seems to me that this could work really well with the Beta - so long as there is room in the design for a frame buffer to be added to the image pipeline - after the ADC and before the debayering. As an added bonus the buffer could also be used to record a number of 4K frames that can be processed for slower playout at full resolution, should we discover the architecture cannot handle real-time playout at 4K.
It would also be good if the frame buffer could be easily upgraded (e.g. a bigger frame buffer giving more time at high speeds). Seriously, I think if this one feature could be added to the camera it would sell really, really well once it goes into production. Currently there simply isn't a low-cost/high-quality alternative to the £50k+ Phantom cameras (I know there's the Kickstarter thing, but the resolution on that camera is really poor) and something like that would be a killer feature.
Thanks,
Colin.
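The relationship the comment above describes between buffer size, burst frame rate, and recordable real time is easy to sketch in Python; the resolution, bit depth, and buffer size below are assumed example numbers, not a Beta specification:

```python
def buffer_seconds(buffer_gib, width, height, bits_per_px, fps):
    """Real-time seconds of footage a RAM ring buffer holds before it
    wraps, assuming raw (pre-debayer) frames with no packing overhead."""
    frame_bytes = width * height * bits_per_px / 8
    frames = (buffer_gib * 2**30) // frame_bytes  # whole frames that fit
    return frames / fps

# Assumed numbers: 8 GiB buffer, 4096x3072 frames at 12-bit raw.
t_300 = buffer_seconds(8, 4096, 3072, 12, 300)  # burst at 300 fps
t_150 = buffer_seconds(8, 4096, 3072, 12, 150)  # burst at 150 fps
```

Halving the burst rate exactly doubles the recordable real time for a fixed buffer, which is the Phantom-style trade-off described in the comment.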
The distance between the mounting holes seems to be 49.53mm (converted from imperial). I do realize it is more practical to lay out PCB traces at a mil pitch, but the mounting holes would be better off being exactly 50mm apart.
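The conversion behind the observed spacing can be checked quickly; the snippet below only assumes the standard definition of a mil (1/1000 inch):

```python
MM_PER_MIL = 0.0254  # 1 mil = 1/1000 inch = 0.0254 mm

def mil_to_mm(mil):
    return mil * MM_PER_MIL

# 1950 mil is the likely imperial pitch behind the observed 49.53 mm;
# an exact metric 50.00 mm pitch would be roughly 1968.5 mil.
spacing_mm = mil_to_mm(1950)
```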