Things like keyboard and signal mode previously only lasted 5s by default,
which was confusing, so they now default to running indefinitely
unless overridden with the -t option.
This patch replaces the hardcoded "pi" user in the dtoverlay-pre and
dtoverlay-post scripts with $SUDO_USER, which is set by the sudo command
to the ID of the user calling it. In the event that SUDO_USER isn't set,
$LOGNAME is used (set by login shells), with an ultimate fallback to "pi".
This change is necessary for users who have changed the login ID, perhaps
as a security measure.
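The change itself lives in the shell scripts, but the precedence is simple enough to sketch; the C below is purely illustrative of the same fallback order and is not part of the patch.

  #include <stdio.h>
  #include <stdlib.h>

  /* Illustrative only: mirror the scripts' fallback order. */
  static const char *target_user(void)
  {
     const char *user = getenv("SUDO_USER");   /* set by sudo */
     if (!user || !*user)
        user = getenv("LOGNAME");              /* set by login shells */
     if (!user || !*user)
        user = "pi";                           /* ultimate fallback */
     return user;
  }

  int main(void)
  {
     printf("Running overlay hooks as user: %s\n", target_user());
     return 0;
  }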
Add parameter to allow RIL components to report the buffer alignment
requirements, and the core will then do those alignments.
It would make more sense for mmal_port_format_commit to fail
should the width/height not be suitably aligned, rather than just
amending the format and not telling userland.
IL didn't have the user buffer flags defined, so there were some
magic (1<<X) flags kicking around. Formalise those as defines.
image_fx was passing only 1<<30 and 1<<31 through. Pass all the
user buffer flags through instead.
This commit adds logging messages being pushed out to the UART output.
It is enabled in one of three ways:
1) sed -i -e "s/BOOT_UART=0/BOOT_UART=1/" bootcode.bin
2) Adding a file called UART to the SD card
3) Adding uart_2ndstage=1 to config.txt
The sed method gives you earlier debug output than the UART file,
which in turn gives earlier output than uart_2ndstage.
Altering the logging level for vcos messages has also been added, e.g.:
vcos_logging_level=*:info,brfs:trace,confzilla_be:warn
This means everything will default to info level, except brfs which gets
trace and confzilla back end which gets warnings. The logging_level
still exists but only controls the vcfw logging levels.
Some of the buffer flags are designated as
MMAL_BUFFER_HEADER_VIDEO_FLAG_xx instead of MMAL_BUFFER_HEADER_FLAG_xx
and are intended to be in buffer->type->video.flags instead of
buffer->flags. The implementation put them in buffer->flags.
Populate both fields with these flags.
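A client that wants to be robust to either location can simply check both; a minimal sketch (the interlaced flag is just one example of a VIDEO_FLAG):

  #include "interface/mmal/mmal.h"

  /* Illustrative: check for the interlaced flag in both locations now
   * that the implementation populates buffer->flags and
   * buffer->type->video.flags with the VIDEO_FLAG_xx values. */
  static int buffer_is_interlaced(const MMAL_BUFFER_HEADER_T *buffer)
  {
     if (buffer->flags & MMAL_BUFFER_HEADER_VIDEO_FLAG_INTERLACED)
        return 1;
     if (buffer->type &&
         (buffer->type->video.flags & MMAL_BUFFER_HEADER_VIDEO_FLAG_INTERLACED))
        return 1;
     return 0;
  }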
MMAL_BUFFER_HEADER_T has a TYPE_SPECIFIC field, which for
video includes the number of planes, pitches and offsets.
This hasn't been populated previously, but it can be helpful
for interfacing to the likes of DRM and FFMPEG to be given
these values.
RIL doesn't support these values, so precompute them for
output ports based on the port format.
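A sketch of how a client interfacing to DRM or FFmpeg might read these values off a received buffer (the fields are those of MMAL_BUFFER_HEADER_VIDEO_SPECIFIC_T; error handling omitted):

  #include <stdio.h>
  #include "interface/mmal/mmal.h"

  /* Illustrative: dump the per-plane layout now reported in the
   * type-specific video fields of a buffer from an output port. */
  static void dump_planes(const MMAL_BUFFER_HEADER_T *buffer)
  {
     const MMAL_BUFFER_HEADER_VIDEO_SPECIFIC_T *video = &buffer->type->video;
     uint32_t i;

     for (i = 0; i < video->planes; i++)
        printf("plane %u: offset %u, pitch %u\n",
               (unsigned)i, (unsigned)video->offset[i], (unsigned)video->pitch[i]);
  }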
The upstream VCHIQ has been denied its cache-line-size property and
forces a cache line size of 32. This doesn't work on V7/8 cpus with
64-bit cache lines.
If an updated DT is found (based on the size encoded in the reg
property, which should be 0x3c not 0xf, and the vchiq node name
"mailbox@7e00b840"), assume the kernel is using the correct cache
line size. A corresponding kernel patch derives the correct value,
and updates the size as indicated.
Allow node names to be changed by assigning to the "name"
meta-property.
Don't create or extend a "reg" property, but do overwrite it if
it already exists.
The old "fragment@<n>" syntax causes newer versions of dtc to complain
about missing "reg" parameters, so there is a move to replace the '@'s
with '-'s. All that really matters is that the node names are distinct.
Modify the dtoverlay library to cope with either form. I don't think
it is necessary to cope with arbitrary fragment names, and keeping
this restriction makes handling disabled ("__dormant__") fragments
easier.
Adds an option DISPMANX_FLAGS_ALPHA_DISCARD_LOWER_LAYERS to the DispmanX
alpha flags that marks the current layer as effectively totally opaque
and full screen, therefore obscuring anything below it and allowing the
lower layers to be dropped from the HVS display list.
Plumb it through IL/MMAL as well so that it can be used as a simple way
to hide the frame buffer from the display.
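A hedged sketch of how a client might use the new flag when adding an element, e.g. a full-screen video layer that hides the frame buffer (display/resource handles and rectangles are assumed to be set up elsewhere):

  #include "bcm_host.h"

  /* Illustrative: mark this element as opaque and full screen so that
   * lower layers (e.g. the frame buffer) can be dropped from the HVS
   * display list. */
  static DISPMANX_ELEMENT_HANDLE_T add_opaque_element(
        DISPMANX_UPDATE_HANDLE_T update,
        DISPMANX_DISPLAY_HANDLE_T display,
        DISPMANX_RESOURCE_HANDLE_T resource,
        const VC_RECT_T *src_rect,
        const VC_RECT_T *dst_rect)
  {
     VC_DISPMANX_ALPHA_T alpha = {
        DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS |
           DISPMANX_FLAGS_ALPHA_DISCARD_LOWER_LAYERS,
        255,  /* fully opaque */
        0     /* no mask resource */
     };

     return vc_dispmanx_element_add(update, display, 10 /* layer */,
                                    dst_rect, resource, src_rect,
                                    DISPMANX_PROTECTION_NONE, &alpha,
                                    NULL /* clamp */, DISPMANX_NO_ROTATE);
  }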
When inserting the time/date stamp into the filename was added,
certain combinations of output filename no longer worked, as the
code was only searching for "%u" or "%d". Specifying %04d would take
the code down the timestamp route, but with invalid parameters.
Make the search slightly more intelligent to allow specifying
the number of digits to insert in the filename.
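A rough sketch of the kind of check involved (a hypothetical helper, not the actual RaspiVid code): scan for a '%', skip an optional width, and accept a 'd' or 'u' conversion.

  #include <ctype.h>
  #include <string.h>

  /* Hypothetical helper: return non-zero if the filename contains a
   * "%d"/"%u" style conversion, optionally with a width such as "%04d",
   * meaning a frame/segment number should be substituted rather than
   * taking the strftime timestamp route. */
  static int filename_wants_number(const char *filename)
  {
     const char *p = strchr(filename, '%');

     while (p)
     {
        const char *q = p + 1;
        while (isdigit((unsigned char)*q))   /* optional width, e.g. "04" */
           q++;
        if (*q == 'd' || *q == 'u')
           return 1;
        p = strchr(q, '%');
     }
     return 0;
  }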
If the alloc_size of a buffer is flagged as zero, then remove
the requirement for buffer->data to be non-NULL in
mmal_port_send_buffer.
This is initially an optimisation for the V4L2 codec support,
where there is a need to send an EOS flag on an empty buffer; it
saves allocating a payload buffer which will never be used.
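A hedged sketch of what this enables on the client side (pool/port setup assumed to exist elsewhere):

  #include "interface/mmal/mmal.h"

  /* Illustrative: send an EOS using an empty buffer header.  With
   * alloc_size at zero, mmal_port_send_buffer no longer insists on
   * buffer->data being non-NULL. */
  static MMAL_STATUS_T send_eos(MMAL_PORT_T *input_port,
                                MMAL_BUFFER_HEADER_T *buffer)
  {
     buffer->data = NULL;
     buffer->alloc_size = 0;
     buffer->length = 0;
     buffer->flags = MMAL_BUFFER_HEADER_FLAG_EOS;

     return mmal_port_send_buffer(input_port, buffer);
  }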
Raspivid: add an option to include H264 SPS timings
H264 SPS headers can include a timings section which the Pi defaults to not including.
Add an option to raspivid to enable these timing parameters.
This patch also refactors setting some parameters which are all conditional on the codec being H264.
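Underneath, enabling the timings comes down to a boolean parameter on the encoder output port; a hedged sketch (the parameter name MMAL_PARAMETER_VIDEO_ENCODE_SPS_TIMING is an assumption here, so check mmal_parameters_video.h):

  #include "interface/mmal/mmal.h"
  #include "interface/mmal/util/mmal_util_params.h"

  /* Illustrative: enable SPS timing info on an H264 encoder output
   * port.  The parameter name is assumed, not confirmed by this patch. */
  static MMAL_STATUS_T enable_sps_timing(MMAL_PORT_T *encoder_output)
  {
     return mmal_port_parameter_set_boolean(encoder_output,
                                            MMAL_PARAMETER_VIDEO_ENCODE_SPS_TIMING,
                                            MMAL_TRUE);
  }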
Some versions of GCC have --as-needed turned on by default.
The binding of libmmal_vc_client is such that the constructor
registers the supported components with the MMAL core, and nothing
calls into it directly. The linker can't tell this, decides
it is unused, and promptly drops it as a dependency - cue no
VideoCore components.
Adding --no-as-needed means that the linker leaves it alone.
It's not nice, but there doesn't appear to be a better solution.
See #178.
* RaspiStill: Apply gpsd info as EXIF tags
Applies GPS information from gpsd as EXIF tags.
Enable via the "-gps" command line argument; libgps.so.22 is required
when enabled.
Only the following GPS fields are added as EXIF tags: GPSDateStamp, GPSTimeStamp,
GPSMeasureMode, GPSSatellites, GPSLatitude, GPSLatitudeRef,
GPSLongitude, GPSLongitudeRef, GPSAltitude, GPSAltitudeRef, GPSSpeed,
GPSSpeedRef, GPSTrack, GPSTrackRef.
There's a slightly quirky use case with deinterlacing.
Interlaced YUV420 is line interleaved, so chroma line 1
is that for luma lines 1&3, and chroma line 2 is for luma lines 2&4.
If you pass such a frame into a standard component (eg the ISP), it
messes up the chroma by assuming chroma line 1 is for luma lines 1&2.
If you pass it into the ISP as double width by half height then the
chroma subsampling behaves correctly. Normally that precludes using
a mmal_connection as that will copy the port format from output
to input. Setting this new flag skips that stage and makes it the
client's responsibility to set both port formats appropriately.
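One plausible shape of the client-side setup under the new flag (a sketch only; which ports need amending, and how, depends on the pipeline):

  #include "interface/mmal/mmal.h"

  /* Illustrative: present an interlaced YUV420 frame to the ISP input
   * as double width by half height so the chroma lines pair up with
   * the correct luma lines.  With the new connection flag set, the
   * connection no longer copies the output format to the input, so the
   * client sets the ISP input format itself. */
  static MMAL_STATUS_T set_isp_input_format(MMAL_PORT_T *src_output,
                                            MMAL_PORT_T *isp_input)
  {
     MMAL_VIDEO_FORMAT_T *vid;

     mmal_format_copy(isp_input->format, src_output->format);

     vid = &isp_input->format->es->video;
     vid->width *= 2;
     vid->height /= 2;
     vid->crop.width *= 2;
     vid->crop.height /= 2;

     return mmal_port_format_commit(isp_input);
  }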
Camera annotation can now support left and right justification of
the text, and setting an x and y offset.
Update the raspicam apps to allow taking these via the command
line.
The code was calling ilclient_get_output_buffer to retrieve a
filled buffer, and then immediately returning it via OMX_FillThisBuffer
before looking at the contents.
OMX_FillThisBuffer does not block, but relies on waiting for the
FillBufferDone callback instead.
The correct order of doing things is to call ilclient_get_output_buffer
(which waits on FillBufferDone if necessary), process the buffer,
and then call OMX_FillThisBuffer to pass it back to OMX for filling
again.
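In ilclient terms, the corrected pattern looks roughly like this (component setup and real buffer consumption omitted):

  #include <stdio.h>
  #include "ilclient.h"

  /* Illustrative: get a filled buffer (blocking on FillBufferDone if
   * necessary), use its contents, then hand it back for refilling. */
  static void process_one_output_buffer(COMPONENT_T *decoder, int out_port,
                                        FILE *out)
  {
     OMX_BUFFERHEADERTYPE *buf =
        ilclient_get_output_buffer(decoder, out_port, 1 /* block */);

     if (buf)
     {
        /* Consume the data before returning the buffer. */
        fwrite(buf->pBuffer + buf->nOffset, 1, buf->nFilledLen, out);

        OMX_FillThisBuffer(ilclient_get_handle(decoder), buf);
     }
  }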
The recent improvements to video_encode mean that it is faster at
processing buffers and would often have reset the buffer header before
hello_encode had a chance to save the data.
It was using assumed knowledge of the padding instead.
The requirements on video_encode have recently been reduced,
so those assumptions became invalid and caused a segfault.
The app was unconditionally enabling the video port on the camera
component, which meant that the component was waiting for buffers
that were never sent (the sending was conditional).
Move the port enable to within the same conditional.
In segmented mode, this allows you to specify that the file
name for each segment uses a time based filename, not a segment
number based one.
If the filename contains %d or %u a segment number is used.
Anything else, and the filename string is used as a formatting
string for the strftime function.
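A rough sketch of the intended behaviour (a hypothetical helper, simplified from the real code):

  #include <stdio.h>
  #include <string.h>
  #include <time.h>

  /* Hypothetical helper: build the next segment's filename.  If the
   * pattern contains %d or %u, substitute the segment number; otherwise
   * treat the whole pattern as a strftime() format string. */
  static void next_segment_name(char *out, size_t out_len,
                                const char *pattern, int segment)
  {
     if (strstr(pattern, "%d") || strstr(pattern, "%u"))
     {
        snprintf(out, out_len, pattern, segment);
     }
     else
     {
        time_t now = time(NULL);
        struct tm tm_now;

        localtime_r(&now, &tm_now);
        strftime(out, out_len, pattern, &tm_now);
     }
  }

So a pattern such as "clip_%Y%m%d_%H%M%S.h264" would give one file per segment named by wall-clock time, while "clip_%04d.h264" keeps the segment-number behaviour.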
When in timelapse mode, the delay before the first shot
was the timelapse value. However, if a low value is set
this does not give enough time for the AE etc to settle.
In addition, the first image was only taken after the
first delay, rather than immediately, which is what would
be expected. So a 5-minute timelapse delay meant the first
image was only taken after 5 minutes.
The patch sets the delay before the first image capture to
a constant value, 1000ms, so there is always time for the AE
etc. to settle.
Fixes: #429, #473
glGetBufferParameteriv updates the client-side cached copy of the
buffer_size. However, when it updated the cache by calling
glxx_buffer_info_set it trashed the mapped_pointer and mapped_size
values, because they live in the same structure as the cached_size
and that structure was being built afresh on the stack.
The trampled mapped_pointer / mapped_size could cause crashes or invalid
buffer data to be uploaded.
Change to read the current cached value and then modify just the
cached_size preserving whatever was in the cache before.
Fixes #323
Signed-off-by: Tim Gover <tim.gover@raspberrypi.org>
glMapBufferOES fails if the usage hint passed to glBufferData was
GL_STREAM_DRAW because glBufferData does not initialise the cached copy
of the buffer size.
The usage hint doesn't make any functional difference with these
functions and GL_STREAM_DRAW was added in GLES 2.0 so this looks like
an old check which was never updated.
Fixes #246
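A minimal client-side sequence that previously failed (illustrative; GL context and buffer object creation assumed):

  #include <GLES2/gl2.h>
  #include <GLES2/gl2ext.h>

  /* Illustrative: this previously failed because glBufferData with
   * GL_STREAM_DRAW did not initialise the cached buffer size, so the
   * subsequent glMapBufferOES was rejected. */
  static void *map_stream_buffer(GLuint vbo, GLsizeiptr size)
  {
     glBindBuffer(GL_ARRAY_BUFFER, vbo);
     glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);

     return glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
  }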