{"id":5748,"date":"2021-04-19T09:15:15","date_gmt":"2021-04-19T09:15:15","guid":{"rendered":"http:\/\/20.186.34.190\/?p=5748"},"modified":"2021-04-22T10:10:20","modified_gmt":"2021-04-22T10:10:20","slug":"fluent-speech-commands-a-dataset-for-spoken-language-understanding-research","status":"publish","type":"post","link":"https:\/\/fluent.ai\/fr\/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research\/","title":{"rendered":"Fluent Speech Commands: A dataset for spoken language understanding research"},"content":{"rendered":"<div class=\"\u201dentry-meta\u201d\"><a class=\"\u201dentry-date\">April 3, 2020<\/a><\/div>\n<p>In recent years, with the advent of deep neural networks, the accuracy of speech recognition models have been notably improved which have made possible the production of speech-to-text systems that can accurately transcribe speech even in difficult scenarios such as noisy environments, spontaneous speech or high variability (speaking rate, accents, etc.). However, spoken language understanding (SLU) is still an open problem. Modern systems are far from being able to correctly interpret the meaning of the words uttered by a user unless the domain is highly constrained. That means that a user\u2019s intents can be only expressed in a very limited number of ways or, in other words, using a simplified version of the language.<\/p>\n<p>One of the most straightforward applications of SLU is the development of vocal interfaces for controlling different types of devices. Cellphones, smart homes, or intelligent cars are just some examples of this. Although this kind of interface is already available, they are usually constrained. Users must say the exact words or phrases on which the system has been trained in order to guarantee a high recognition accuracy. This scenario is usually frustrating for users since they have to memorize the commands to be able to use the system properly. 
To overcome this problem, systems should support natural language interaction and be able to handle several variations of each intent or command. The user can then employ multiple wordings or paraphrases to interact with the interface, which greatly facilitates the interaction.

Releasing the Fluent Speech Commands dataset

At Fluent.ai, our primary research is focused on end-to-end SLU, i.e., directly extracting the intent from speech without converting it to text first. This is somewhat similar to how humans do speech recognition. Such SLU models have caught the attention of others in the research community in recent years. However, there are not many SLU datasets readily available to the research community: most of the available datasets are either closed source or too small. The lack of a good open-source dataset for SLU makes it impossible for most people to perform high-quality, reproducible research on this topic. To solve this problem, we created a new SLU dataset, "Fluent Speech Commands". Specifically, Fluent Speech Commands can be employed to train and test a system that recognizes a set of spoken commands, expressed with various different wordings, for interacting with a typical voice assistant in a smart-home scenario.

The Fluent Speech Commands dataset contains 30,043 utterances from 97 speakers. It is recorded as 16 kHz single-channel .wav files, each containing a single utterance used for controlling smart-home appliances or a virtual assistant, for example, "put on the music" or "turn up the heat in the kitchen". Each audio file is labeled with three slots: action, object, and location. A slot takes on one of multiple values: for instance, the "location" slot can take on the values "none", "kitchen", "bedroom", or "washroom". We refer to the combination of slot values as the intent of the utterance.
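As an illustrative sketch (the `Intent` class and the mapping below are hypothetical, not part of the dataset's tooling), an intent can be modeled as the triple of slot values, with many distinct wordings sharing a single intent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes instances hashable, so they can go in a set
class Intent:
    """The combination of the three slot values labeling an utterance."""
    action: str
    object: str
    location: str

# Several distinct wordings can map to the same intent.
wording_to_intent = {
    "turn on the lights": Intent("activate", "lights", "none"),
    "switch the lights on": Intent("activate", "lights", "none"),
    "lights on": Intent("activate", "lights", "none"),
}

# All three wordings collapse to a single unique intent.
unique_intents = set(wording_to_intent.values())
print(len(unique_intents))  # 1
```

An end-to-end SLU model is trained to map audio directly to such a slot triple, rather than to the transcription.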
For each intent, there are multiple possible wordings: for example, the intent {action: "activate", object: "lights", location: "none"} can be expressed as "turn on the lights", "switch the lights on", "lights on", etc. The dataset has a total of 248 phrasings mapping to 31 unique intents. Demographic information about the anonymized speakers (age range, gender, speaking ability, etc.) is included along with the dataset. The utterances are randomly divided into train, valid, and test splits in such a way that no speaker appears in more than one split. Each split contains all possible wordings for each intent, though our code has the option to include data for only certain wordings for different sets, to test the model's ability to recognize wordings not seen during training. The dataset has a .csv file for each split that lists the speaker ID, file path, transcription, and slots for all the .wav files in that split. The splits are tabulated below:

Split    # of speakers    # of utterances
Train    77               23,132
Valid    10               3,118
Test     10               3,793

We are releasing this dataset for academic research only.
It is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license. We hope that the research community finds this dataset useful.

Access the dataset here: https://groups.google.com/a/fluent.ai/forum/#!forum/fluent-speech-commands

License: This work is released strictly for academic research only. The dataset, in whole or in part, is not authorized to be used for any commercial purpose, including training, testing, benchmarking, or developing a product. The full license is available at /wp-content/uploads/2021/04/Fluent_Speech_Commands_Public_License.pdf.