Commit graph

6589 commits

Author SHA1 Message Date
j f0e3c2775e log addMedia call 2015-10-20 12:17:04 +02:00
j cc9464082f use shift-[1-0] to switch between item views, fixes #2837 2015-10-13 09:10:48 +01:00
j d16bbf6ba8 remove facet duplicates 2015-10-12 18:44:20 +01:00
j c17f0a4376 add new file for r5049 2015-10-12 18:43:41 +01:00
j aafac3c1d8 only store one item/key/value facet, remove facets that differ only in case 2015-10-12 17:45:08 +02:00
rolux 69254bbe48 changeid can also be a list of ids, use text 2015-10-12 15:25:24 +01:00
rolux 4ed2d940cf migrate annotation sequence in item, not annotation 2015-10-12 15:24:03 +01:00
j 36ebdf0a1c fix copy of selected annotation via menu 2015-10-06 10:05:54 +03:00
j 7c630ca0b1 inline function that only gets called once. fixes #2841 2015-10-05 12:53:11 +02:00
j 2e3b61d163 don't fail if files are already gone 2015-10-04 18:17:34 +01:00
j e761ee692d fix copy clip 2015-10-04 18:13:06 +01:00
j 82549c5d7a copy/paste clips in list order, not in selection order 2015-10-04 16:27:48 +01:00
j 9eae0a0762 pass index to split/join to keep position 2015-10-04 16:09:51 +01:00
j b20a655fa8 fix copy/copyadd/delete of clips via menu 2015-10-04 14:18:29 +01:00
j 6f4c010be0 only return layers defined in config 2015-10-04 14:11:39 +01:00
j be1589569e fix clip index for newly added clips 2015-10-04 11:04:46 +01:00
j 5649892bbd annotation layer flag is boolean 2015-10-04 11:20:45 +02:00
9265b8a53b Clip.save: fetch annotations once, not ~ 2 * n_layers
With 17 layers and 12 clipLayers, this repeated fetching was around 49%
of the cost of this function, which was in turn 94% of the cost of
creating many new annotations with mostly-unique endpoints. This helps a
bit...

If the order of clipLayers is not meant to be significant to sortvalue
(which I assume it is), then this could be simpler.
2015-10-04 11:17:22 +02:00
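
A minimal sketch of the fetch-once pattern described in the commit above, assuming Django model names (a Clip with an annotations reverse relation and one boolean flag per clip layer) that are illustrative, not the actual pan.do/ra code:

    from django.conf import settings
    from django.db import models

    class Clip(models.Model):
        # ... one BooleanField per clip layer, names matching the layer ids ...

        def save(self, *args, **kwargs):
            # One query for all of the clip's annotations, instead of
            # roughly two queries per layer:
            annotations = list(self.annotations.all())
            for layer in settings.CONFIG['clipLayers']:
                setattr(self, layer,
                        any(a.layer == layer for a in annotations))
            super(Clip, self).save(*args, **kwargs)
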
j 2a55bd3eec edit paste only supports clips 2015-10-04 11:11:44 +02:00
j a430c6bdf4 make facets case-insensitive 2015-09-25 14:44:02 +01:00
j 1e81dc4fa1 switch 2015-09-24 18:35:13 +01:00
j 79dbeabafc not again 2015-09-24 18:26:59 +01:00
j eea9321b2e not not not 2015-09-24 18:23:25 +01:00
j 96301d6a9c not 2015-09-24 18:21:01 +01:00
j f12dfdc4a3 import subtitles if no subtitles exist 2015-09-24 18:16:29 +01:00
j f790b039da local variable, remove duplicate code 2015-09-22 10:32:02 +01:00
j 1ce1ca7d89 poster keys 2015-09-21 18:31:44 +01:00
j 8bb7ae436f don't fail if layer does not exist 2015-09-20 17:50:23 +01:00
j 5f1d8425a1 tune vm install 2015-09-20 17:50:02 +01:00
j 65fb9ccb6d reduce ffmpeg output 2015-09-16 15:04:44 +01:00
8f3b3036df Support autocomplete from a group of layers
The idea here is to have several layers which share a set of tags. This
mirrors what we already have if several layers reference the same type
of entity. You might have config like this:

        {
            "id": "keywords",
            "title": "Keywords",
            "canAddAnnotations": {"member": true, "staff": true, "admin": true},
            "item": "Keyword",
            "overlap": true,
            "type": "string",
            "autocomplete": true,
            "autocompleteKeys": ["keywords", "minorkeywords"]
        },
        {
            "id": "minorkeywords",
            "title": "Minor Keywords",
            "canAddAnnotations": {"member": true, "staff": true, "admin": true},
            "item": "Keyword",
            "overlap": true,
            "type": "string",
            "autocomplete": true,
            "autocompleteKeys": ["keywords", "minorkeywords"]
        },

Now, adding new keywords in either bin will offer autocompletions from
the union of the two layers. The other option would be to do this on the
server side, but I thought this was a less invasive way to achieve it.
2015-09-14 21:29:02 +02:00
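
For contrast, a hedged sketch of the server-side alternative mentioned above (this is not what the commit does; the model fields and lookup are assumptions): expand the requested layer to its autocompleteKeys group before matching values.

    from django.conf import settings
    from annotation.models import Annotation  # import path assumed

    def autocomplete(layer_id, prefix):
        layer = [l for l in settings.CONFIG['layers']
                 if l['id'] == layer_id][0]
        # Fall back to just the requested layer if no group is configured:
        layers = layer.get('autocompleteKeys', [layer_id])
        values = Annotation.objects.filter(
            layer__in=layers, value__istartswith=prefix
        ).values_list('value', flat=True)
        return sorted(set(values))
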
4f064fda76 Make Annotation.public_id non-NULLable (fixes #2829)
This fixes this race:

     request 1                          request 2
     -----------------------------      -------------------------
     addAnnotation(...)
     super(Annotation, self).save()
                                        findAnnotations(...)
                                        returns [{id: null, ...}]
     annotation.public_id = x
     returns {id: x}
2015-09-14 14:18:10 +02:00
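
On the schema side, enforcing NOT NULL could look roughly like the migration below (app label, previous migration, and field type are assumptions); the save path then has to assign public_id before the initial INSERT, so no other request can ever observe a row without an id.

    from django.db import migrations, models

    class Migration(migrations.Migration):
        dependencies = [
            ('annotation', '0001_initial'),  # previous migration assumed
        ]
        operations = [
            migrations.AlterField(
                model_name='annotation',
                name='public_id',
                # null=False is the default, so findAnnotations() can no
                # longer see a half-initialized row as {id: null}:
                field=models.CharField(max_length=128),
            ),
        ]
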
eaa07b1ccb ClipManager: match annotation layer case-sensitively (fixes #2832)
The case must be correct anyway for the layer to be found in
settings.CONFIG['layers']. Running this:

    Q(annotation__layer__iexact='foo') &
    Q(annotation__findvalue__icontains='bar')

compiles to

    upper(layer) = upper('foo') and
    ...

which can't use the case-sensitive annotation_annotation_layer index.
This:

    Q(annotation__layer__exact='foo') &
    Q(annotation__findvalue__icontains='bar')

can. (It still can't use the findvalue_like index, though! The other
option is to add indices on upper(layer) and upper(findvalue)
[varchar_pattern_ops].)
2015-09-14 14:13:06 +02:00
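
If case-insensitive matching were ever wanted again, the expression indices mentioned in the last paragraph could be added roughly like this (a sketch: the index names and migration labels are assumptions; the annotation_annotation table name comes from the text above):

    from django.db import migrations

    class Migration(migrations.Migration):
        dependencies = [
            ('annotation', '0002_previous'),  # previous migration assumed
        ]
        operations = [
            migrations.RunSQL(
                "CREATE INDEX annotation_layer_upper_idx ON "
                "annotation_annotation (upper(layer) varchar_pattern_ops)",
                "DROP INDEX annotation_layer_upper_idx",
            ),
            migrations.RunSQL(
                "CREATE INDEX annotation_findvalue_upper_idx ON "
                "annotation_annotation (upper(findvalue) varchar_pattern_ops)",
                "DROP INDEX annotation_findvalue_upper_idx",
            ),
        ]
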
da1ad5b9c1 ClipManager.filter_annotations: fix 'opterator' typo (fixes #2832) 2015-09-14 14:11:40 +02:00
8759b569da Cache serialized entities when fetching many annotations
For a scene with ~5600 annotations, of which ~3100 are entities, this
cuts fetching the scene from 12 seconds to 2 seconds.
2015-09-14 14:08:02 +02:00
eebb0b5681 Combine {Item,Clip,edit.Clip}.get_layers()
This has several benefits:

    • Clip.get_layers() (used by smart edits) and Item.get_layers() pick up
      the select_related('user') optimization added for static edits in
      r5007.

    • Static edits and items pick up the optimization from r4941 to select
      annotations once, not once per layer.

Fetching an item with ~1000 annotations took ~1s without this patch,
~0.34s with this patch. Another item with ~6000 annotations took ~11.6s
before, ~8.6s after.

Because this block is moved out to the top:

    if user and user.is_anonymous():
        user = None

then, for anonymous users,

"editable": false,

is no longer included in the annotations. The old behaviour ended up
including this key in all layers listed before the first private layer
in the config, and leaving it out from later ones. So this new behaviour
is more consistent.
2015-09-14 14:06:43 +02:00
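
Putting those pieces together, the shared helper implied above might look roughly like this (a sketch; the names and json() signature are assumptions):

    from django.conf import settings

    def get_layers(obj, user=None):
        if user and user.is_anonymous():
            user = None
        layers = dict((l['id'], []) for l in settings.CONFIG['layers'])
        # One query for all layers (r4941), with users preloaded (r5007):
        qs = obj.annotations.select_related('user').order_by('start', 'end')
        for annotation in qs:
            if annotation.layer in layers:
                layers[annotation.layer].append(annotation.json(user=user))
        return layers
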
j 861be871d5 don't install avahi-daemon in lxc 2015-09-03 19:57:58 +02:00
j 15da67cfd6 update text too 2015-09-03 19:34:31 +02:00
j 2c406a76e0 create trusty container 2015-09-03 19:33:48 +02:00
j fd2992c588 add option to sort by number of annotations per layer 2015-09-03 00:52:20 +02:00
ace04688f2 Entity.save(): update annotations async (fixes #2827, kinda) 2015-09-02 14:32:16 +02:00
j 41b50ccdb8 add canPlayClips flag to annotation layers and use those layers to limit playback to clips 2015-08-27 11:27:27 +02:00
j 83013bbe5e Update items when entities are renamed (fixes #2825) 2015-08-26 19:42:03 +02:00
j 944fe1a9dd only run migration if we have items 2015-08-07 17:32:17 +02:00
5418613023 embedTimeline: fix subtitles (fixes #2823) 2015-08-07 13:42:20 +02:00
3da3bd37fd addClips: return an error if item/in/out is missing, not a 500 2015-08-07 13:37:56 +02:00
j 819181726a slightly faster json serialization of annotations 2015-08-02 16:22:45 +02:00
f3fdded07d Edit.json: preload annotation users
The expensive part of fetching an edit is JSONifying the clips'
annotations. Profiling showed that the main cost was Annotation.json(),
and within that:

File: /srv/pandora/pandora/annotation/models.py
Function: json at line 216

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
216                                               def json(self, layer=False, keys=None, user=None):
217       632          827      1.3      0.1          j = {
218       632      1048170   1658.5     89.6              'user': self.user.username,
219                                                   }

Obviously this join just moves some of the cost further out, but it
brings my micro-benchmark down from 1.3s to 0.3s.
2015-08-02 16:02:47 +02:00
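
The preload itself is small; a sketch under the same naming assumptions as above:

    def edit_clips_json(clips, user=None):
        data = []
        for clip in clips:
            # select_related('user') joins the user row up front, so
            # Annotation.json() reading self.user.username never issues
            # a separate query per annotation:
            annotations = clip.annotations.select_related('user')
            data.append([a.json(user=user) for a in annotations])
        return data
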
ab5a20d3a2 Treat findEdits({}) like findEdits({query: {}}) (fixes #2820) 2015-07-22 21:37:55 +02:00
4c0652e683 errorlogsDialog: fix searching text (fixes #2819) 2015-07-22 21:37:04 +02:00