{"id":6427,"date":"2024-01-20T18:47:51","date_gmt":"2024-01-20T17:47:51","guid":{"rendered":"https:\/\/olivier.hoarau.site\/?p=6427"},"modified":"2024-01-20T18:47:51","modified_gmt":"2024-01-20T17:47:51","slug":"les-timelines-multiples-sous-kdenlive-et-reconnaissance-vocale-avec-whisper","status":"publish","type":"post","link":"https:\/\/olivier.hoarau.org\/?p=6427","title":{"rendered":"Les timelines multiples sous Kdenlive et reconnaissance vocale avec whisper"},"content":{"rendered":"\n<p>La version 23.04 du logiciel de montage vid\u00e9o opensource <a href=\"https:\/\/www.funix.org\/fr\/linux\/index.php?ref=kdenlive2\">Kdenlive<\/a> avait emmen\u00e9 l&rsquo;\u00e9volution majeure des timelines multiples ou s\u00e9quences (nested timelines en anglais) comme je l&rsquo;avais rapport\u00e9 dans ce <a href=\"https:\/\/olivier.hoarau.org\/?p=6323\">post<\/a>.<\/p>\n\n\n\n<p>Je l&rsquo;utilise maintenant sans cesse et il me semblait utile de vous pr\u00e9senter plus en avant cette fonctionnalit\u00e9 dans un tutoriel vid\u00e9o que voici.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-prise-en-charge-des-contenus-embarqu-s wp-block-embed-prise-en-charge-des-contenus-embarqu-s wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n <span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" class=\"youtube-player\" width=\"474\" height=\"267\" src=\"https:\/\/www.youtube.com\/embed\/FDT1wrYn1pg?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=fr-FR&#038;autohide=2&#038;wmode=transparent\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\"><\/iframe><\/span> \n<\/div><\/figure>\n\n\n\n<p>\u00c7a m&rsquo;a permis de tester la fonctionnalit\u00e9 <a href=\"https:\/\/openai.com\/research\/whisper\">Whisper<\/a> bas\u00e9e sur 
l&rsquo;intelligence artificielle qui permet de faire du sous titrage automatique et m\u00eame de la traduction en anglais.  Elle se base sur <a href=\"https:\/\/pytorch.org\/\">PyTorch<\/a> qui est une biblioth\u00e8que opensource \u00e9crite en <strong>python<\/strong>  d\u00a0\u00bbapprentissage profond (deep learning) qui peut utiliser le GPU s&rsquo;il est compatible.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Pour installer <strong>PyTorch<\/strong> avec une carte graphique NVIDIA, il a fallu trouver la version du <a href=\"https:\/\/fr.wikipedia.org\/wiki\/Compute_Unified_Device_Architecture\">CUDA<\/a> je me suis m\u00e9lang\u00e9 les pinceaux avec les commandes <strong>nvcc-version<\/strong> et <strong>nvidia-smi <\/strong>qui me donnent des r\u00e9sultats discordants, finalement j&rsquo;ai mis le num\u00e9ro de version indiqu\u00e9 sur le site <a href=\"https:\/\/pytorch.org\/\">PyTorch<\/a>.<\/p>\n\n\n<div class=\"wp-block-wab-pastacode\">\n\t<div class=\"code-embed-wrapper\"> <pre class=\"language-markup code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-markup code-embed-code\">pip install torch torchvision torchaudio --index-url https:\/\/download.pytorch.org\/whl\/cu11.8<\/code><\/pre> <div class=\"code-embed-infos\"> <\/div> <\/div><\/div>\n\n\n\n<p>Pour savoir si tout est bien install\u00e9 et pr\u00eat \u00e0 l&rsquo;usage, on tape la commande:<\/p>\n\n\n<div class=\"wp-block-wab-pastacode\">\n\t<div class=\"code-embed-wrapper\"> <pre class=\"language-markup code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-markup code-embed-code\">python -m torch.utils.collect_env<\/code><\/pre> <div class=\"code-embed-infos\"> <\/div> <\/div><\/div>\n\n\n\n<p>Et voil\u00e0 le r\u00e9sultat<\/p>\n\n\n<div class=\"wp-block-wab-pastacode\">\n\t<div class=\"code-embed-wrapper\"> <pre class=\"language-markup code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code 
class=\"language-markup code-embed-code\">Collecting environment information...<br\/>\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/cuda\/__init__.py:190: UserWarning: <br\/>    Found GPU0 NVIDIA GeForce GTX 760 (192-bit) which is of cuda capability 3.0.<br\/>    PyTorch no longer supports this GPU because it is too old.<br\/>    The minimum cuda capability supported by this library is 3.7.<br\/>    <br\/>  warnings.warn(<br\/>PyTorch version: 2.1.2 cu118<br\/>Is debug build: False<br\/>CUDA used to build PyTorch: 11.8<br\/>ROCM used to build PyTorch: N\/A<br\/><br\/>OS: Mageia 9 (x86_64)<br\/>GCC version: (Mageia 12.3.0-3.mga9) 12.3.0<br\/>Clang version: Could not collect<br\/>CMake version: version 3.26.4<br\/>Libc version: glibc-2.36<br\/><br\/>Python version: 3.10.11 (main, Apr 16 2023, 03:21:15) [GCC 12.2.1 20230415] (64-bit runtime)<br\/>Python platform: Linux-6.5.13-desktop-6.mga9-x86_64-with-glibc2.36<br\/>Is CUDA available: True<br\/>CUDA runtime version: 12.1.105<br\/>CUDA_MODULE_LOADING set to: LAZY<br\/>GPU models and configuration: GPU 0: NVIDIA GeForce GTX 760 (192-bit)<br\/>Nvidia driver version: 470.223.02<br\/>cuDNN version: Could not collect<br\/>HIP runtime version: N\/A<br\/>MIOpen runtime version: N\/A<br\/>Is XNNPACK available: True<br\/><br\/>CPU:<br\/>Architecture\u00a0:                          x86_64<br\/>Mode(s) op\u00e9ratoire(s) des processeurs\u00a0: 32-bit, 64-bit<br\/>Tailles des adresses:                   39 bits physical, 48 bits virtual<br\/>Boutisme\u00a0:                              Little Endian<br\/>Processeur(s)\u00a0:                         8<br\/>Liste de processeur(s) en ligne\u00a0:       0-7<br\/>Identifiant constructeur\u00a0:              GenuineIntel<br\/>Nom de mod\u00e8le\u00a0:                         Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz<br\/>Famille de processeur\u00a0:                 6<br\/>Mod\u00e8le\u00a0:                                60<br\/>Thread(s) par c\u0153ur\u00a0:                    
2<br\/>C\u0153ur(s) par socket\u00a0:                    4<br\/>Socket(s)\u00a0:                             1<br\/>R\u00e9vision\u00a0:                              3<br\/>multiplication des MHz du\/des CPU(s)\u00a0:  93%<br\/>Vitesse maximale du processeur en MHz\u00a0: 4000,0000<br\/>Vitesse minimale du processeur en MHz\u00a0: 800,0000<br\/>BogoMIPS\u00a0:                              7183,94<br\/>Drapeaux\u00a0:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_<br\/>good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm c<br\/>puid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts vnmi md_clear flush_l1d<br\/>Virtualisation\u00a0:                        VT-x<br\/>Cache L1d\u00a0:                             128 KiB (4 instances)<br\/>Cache L1i\u00a0:                             128 KiB (4 instances)<br\/>Cache L2\u00a0:                              1 MiB (4 instances)<br\/>Cache L3\u00a0:                              8 MiB (1 instance)<br\/>N\u0153ud(s) NUMA\u00a0:                          1<br\/>N\u0153ud NUMA\u00a00 de processeur(s)\u00a0:          0-7<br\/>Vuln\u00e9rabilit\u00e9 Gather data sampling\u00a0:    Not affected<br\/>Vuln\u00e9rabilit\u00e9 Itlb multihit\u00a0:           KVM: Mitigation: VMX disabled<br\/>Vuln\u00e9rabilit\u00e9 L1tf\u00a0:                    Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable<br\/>Vuln\u00e9rabilit\u00e9 Mds\u00a0:                     Mitigation; Clear CPU buffers; SMT vulnerable<br\/>Vuln\u00e9rabilit\u00e9 Meltdown\u00a0:            
    Mitigation; PTI<br\/>Vuln\u00e9rabilit\u00e9 Mmio stale data\u00a0:         Unknown: No mitigations<br\/>Vuln\u00e9rabilit\u00e9 Retbleed\u00a0:                Not affected<br\/>Vuln\u00e9rabilit\u00e9 Spec rstack overflow\u00a0:    Not affected<br\/>Vuln\u00e9rabilit\u00e9 Spec store bypass\u00a0:       Mitigation; Speculative Store Bypass disabled via prctl<br\/>Vuln\u00e9rabilit\u00e9 Spectre v1\u00a0:              Mitigation; usercopy\/swapgs barriers and __user pointer sanitization<br\/>Vuln\u00e9rabilit\u00e9 Spectre v2\u00a0:              Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected<br\/>Vuln\u00e9rabilit\u00e9 Srbds\u00a0:                   Mitigation; Microcode<br\/>Vuln\u00e9rabilit\u00e9 Tsx async abort\u00a0:         Not affected<br\/><br\/>Versions of relevant libraries:<br\/>[pip3] numpy==1.23.5<br\/>[pip3] torch==2.1.2<br\/>[pip3] torchaudio==2.1.2 cu118<br\/>[pip3] torchvision==0.16.2 cu118<br\/>[pip3] triton==2.1.0<\/code><\/pre> <div class=\"code-embed-infos\"> <\/div> <\/div><\/div>\n\n\n\n<p>The first few lines make me fear the worst, but I give it a try in <strong>Kdenlive<\/strong> anyway.  
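<\/p>

<p>Before trying it from Kdenlive, the same compatibility question can be checked directly in Python. The sketch below is purely illustrative (pick_whisper_device is a hypothetical helper, not a Kdenlive or Whisper function); it mirrors the test behind the warning above, where PyTorch requires a CUDA compute capability of at least 3.7 and the GTX 760 only reports 3.0:<\/p>

```python
# Hypothetical helper (not a Kdenlive/Whisper API): choose the device
# string to hand to whisper.load_model(), falling back to "cpu" when
# the GPU is absent or below PyTorch's minimum compute capability.
MIN_CAPABILITY = (3, 7)  # minimum quoted by the PyTorch warning

def pick_whisper_device(cuda_available: bool, capability: tuple) -> str:
    # Tuple comparison handles (major, minor) ordering correctly.
    if cuda_available and capability >= MIN_CAPABILITY:
        return "cuda"
    return "cpu"

# The GTX 760 in the log above reports capability 3.0:
print(pick_whisper_device(True, (3, 0)))   # -> cpu
print(pick_whisper_device(True, (8, 6)))   # -> cuda
print(pick_whisper_device(False, (0, 0)))  # -> cpu
```

<p>With a real installation, the two inputs would come from torch.cuda.is_available() and torch.cuda.get_device_capability(0).<\/p>

<p>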
In the <strong>kdenlive<\/strong> configuration settings I select <strong>Whisper<\/strong>, then under <strong>Device<\/strong> my Nvidia graphics card.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/i0.wp.com\/olivier.hoarau.org\/wp-content\/uploads\/whisper-gpu.jpg?ssl=1\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" width=\"474\" height=\"324\" src=\"https:\/\/i0.wp.com\/olivier.hoarau.org\/wp-content\/uploads\/whisper-gpu.jpg?resize=474%2C324&#038;ssl=1\" alt=\"\" class=\"wp-image-6429\" srcset=\"https:\/\/i0.wp.com\/olivier.hoarau.org\/wp-content\/uploads\/whisper-gpu.jpg?resize=1024%2C700&amp;ssl=1 1024w, https:\/\/i0.wp.com\/olivier.hoarau.org\/wp-content\/uploads\/whisper-gpu.jpg?resize=300%2C205&amp;ssl=1 300w, https:\/\/i0.wp.com\/olivier.hoarau.org\/wp-content\/uploads\/whisper-gpu.jpg?resize=768%2C525&amp;ssl=1 768w, https:\/\/i0.wp.com\/olivier.hoarau.org\/wp-content\/uploads\/whisper-gpu.jpg?w=1061&amp;ssl=1 1061w, https:\/\/i0.wp.com\/olivier.hoarau.org\/wp-content\/uploads\/whisper-gpu.jpg?w=948&amp;ssl=1 948w\" sizes=\"auto, (max-width: 474px) 100vw, 474px\" \/><\/a><\/figure>\n\n\n\n<p>I then launch automatic subtitle creation with the <strong>Speech Recognition<\/strong> command, but unfortunately I get no further than this error:<\/p>\n\n\n<div class=\"wp-block-wab-pastacode\">\n\t<div class=\"code-embed-wrapper\"> <pre class=\"language-markup code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-markup code-embed-code\">\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/cuda\/__init__.py:190: UserWarning: <br\/>    Found GPU0 NVIDIA GeForce GTX 760 (192-bit) which is of cuda capability 3.0.<br\/>    PyTorch no longer supports this GPU because it is too old.<br\/>    The minimum cuda capability supported by this library is 3.7.<br\/>    <br\/>  warnings.warn(<br\/>Traceback (most 
recent call last):<br\/>  File &quot;\/usr\/share\/kdenlive\/scripts\/whispertosrt.py&quot;, line 54, in &lt;module&gt;<br\/>    sys.exit(main())<br\/>  File &quot;\/usr\/share\/kdenlive\/scripts\/whispertosrt.py&quot;, line 33, in main<br\/>    result = whispertotext.run_whisper(source, model, device, task, language)<br\/>  File &quot;\/var\/share\/kdenlive\/scripts\/whispertotext.py&quot;, line 53, in run_whisper<br\/>    model = whisper.load_model(model, device)<br\/>  File &quot;\/home\/olivier\/.local\/lib\/python3.10\/site-packages\/whisper\/__init__.py&quot;, line 146, in load_model<br\/>    checkpoint = torch.load(fp, map_location=device)<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/serialization.py&quot;, line 1014, in load<br\/>    return _load(opened_zipfile,<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/serialization.py&quot;, line 1422, in _load<br\/>    result = unpickler.load()<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/serialization.py&quot;, line 1392, in persistent_load<br\/>    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/serialization.py&quot;, line 1366, in load_tensor<br\/>    wrap_storage=restore_location(storage, location),<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/serialization.py&quot;, line 1296, in restore_location<br\/>    return default_restore_location(storage, map_location)<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/serialization.py&quot;, line 381, in default_restore_location<br\/>    result = fn(storage, location)<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/serialization.py&quot;, line 279, in _cuda_deserialize<br\/>    return obj.cuda(device)<br\/>  File &quot;\/usr\/local\/lib64\/python3.10\/site-packages\/torch\/_utils.py&quot;, line 114, in 
_cuda<br\/>    untyped_storage = torch.UntypedStorage(<br\/>RuntimeError: NVML_SUCCESS == r INTERNAL ASSERT FAILED at &quot;..\/c10\/cuda\/CUDACachingAllocator.cpp&quot;:1154, please report a bug to PyTorch. <\/code><\/pre> <div class=\"code-embed-infos\"> <\/div> <\/div><\/div>\n\n\n\n<p>It does work, on the other hand, once I switch back to CPU mode:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/www.funix.org\/fr\/linux\/images\/video\/montage\/kdeenlive\/19-4\/whisper3.jpg?w=474&#038;ssl=1\" alt=\"\"\/><\/figure>\n\n\n\n<p>The result is quite good when you select French and the diction is clear; the automatic translation, on the other hand, did not work.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Version 23.04 of the open-source video editor Kdenlive brought the major new feature of multiple timelines, or sequences (nested timelines), as I reported in this post. I now use it constantly, and it seemed worth presenting this feature in more depth in the video tutorial below. 
This &hellip; <a href=\"https:\/\/olivier.hoarau.org\/?p=6427\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Multiple timelines in Kdenlive and speech recognition with Whisper<\/span>  <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_post_was_ever_published":false,"_share_on_mastodon":"0"},"categories":[5],"tags":[26],"class_list":["post-6427","post","type-post","status-publish","format-standard","hentry","category-logiciels-libres","tag-kdenlive"],"share_on_mastodon":{"url":"https:\/\/mastodon.social\/@funix\/111789531046962121","error":""},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/peOjJ-1FF","jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=\/wp\/v2\/posts\/6427","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6427"}],"version-history":[{"count":2,"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=\/wp\/v2\/posts\/6427\/revisions"}],"predecessor-version":[{"id":6430,"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=\/wp\/v2\/posts\/6427\/revisions\/6430"}],"wp:attachment":[{"href":"https:\/\
/olivier.hoarau.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6427"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6427"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/olivier.hoarau.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6427"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}