<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-03-10T03:49:07Z</responseDate>
  <request verb="GetRecord" identifier="oai:hiroshima-cu.repo.nii.ac.jp:00001815" metadataPrefix="jpcoar_2.0">https://hiroshima-cu.repo.nii.ac.jp/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:hiroshima-cu.repo.nii.ac.jp:00001815</identifier>
        <datestamp>2023-07-25T10:31:18Z</datestamp>
        <setSpec>52:408</setSpec>
      </header>
      <metadata>
        <jpcoar:jpcoar xmlns:datacite="https://schema.datacite.org/meta/kernel-4/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcndl="http://ndl.go.jp/dcndl/terms/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:jpcoar="https://github.com/JPCOAR/schema/blob/master/2.0/" xmlns:oaire="http://namespace.openaire.eu/schema/oaire/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rioxxterms="http://www.rioxx.net/schema/v2.0/rioxxterms/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="https://github.com/JPCOAR/schema/blob/master/2.0/" xsi:schemaLocation="https://github.com/JPCOAR/schema/blob/master/2.0/ jpcoar_scm.xsd">
          <dc:title>Natural Language Dialog System Considering Speaker’s Emotion Calculated from Acoustic Features</dc:title>
          <jpcoar:creator>
            <jpcoar:creatorName>TAKAHASHI, Takumi</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="ja-Kana">タカハシ, タクミ</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>MERA, Kazuya</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="ja-Kana">メラ, カズヤ</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>TANG, Ba Nhat</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>KUROSAWA, Yoshiaki</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="ja-Kana">クロサワ, ヨシアキ</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName>TAKEZAWA, Toshiyuki</jpcoar:creatorName>
            <jpcoar:creatorName xml:lang="ja-Kana">タケザワ, トシユキ</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="ja">高橋, 拓誠</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="ja">目良, 和也</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="ja">黒澤, 義明</jpcoar:creatorName>
          </jpcoar:creator>
          <jpcoar:creator>
            <jpcoar:creatorName xml:lang="ja">竹澤, 寿幸</jpcoar:creatorName>
          </jpcoar:creator>
          <dc:rights>Copyright 2017 Springer. This is the author’s version of a work that was accepted for publication in the following source: Takumi Takahashi, Kazuya Mera, Tang Ba Nhat, Yoshiaki Kurosawa, Toshiyuki Takezawa (2017) Natural Language Dialog System Considering Speaker’s Emotion Calculated from Acoustic Features. In Kristiina Jokinen, Graham Wilcock (Eds.) Dialogues with Social Robots : Enablements, Analyses, and Evaluation, Lecture Notes in Electrical Engineering, volume 427, 145-157. The final publication is available at Springer via http://dx.doi.org/10.1007/978-981-10-2585-3_11.</dc:rights>
          <jpcoar:subject subjectScheme="Other">Interactive Voice Response system (IVR)</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Acoustic features</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Emotion</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Support Vector Machine (SVM)</jpcoar:subject>
          <jpcoar:subject subjectScheme="Other">Artificial Intelligence Markup Language (AIML)</jpcoar:subject>
          <datacite:description descriptionType="Other">application/pdf</datacite:description>
          <datacite:description descriptionType="Abstract">With the development of Interactive Voice Response (IVR) systems, people can not only operate computer systems through task-oriented conversation but also enjoy non-task-oriented conversation with the computer. When an IVR system generates a response, it usually refers to just the verbal information of the user’s utterance. However, when a person gloomily says “I’m fine,” people will respond not by saying “That’s wonderful” but “Really?” or “Are you OK?” because we can consider both verbal and non-verbal information such as tone of voice, facial expressions, gestures, and so on. In this paper, we propose an intelligent IVR system that considers not only verbal but also non-verbal information. To estimate a speaker’s emotion (positive, negative, or neutral), 384 acoustic features extracted from the speaker’s utterance are utilized for machine learning (SVM). Artificial Intelligence Markup Language (AIML)-based response-generating rules are expanded to be able to consider the speaker’s emotion. As a result of the experiment, subjects felt that the proposed dialog system was more likable and enjoyable and gave less machine-like reactions.</datacite:description>
          <datacite:description descriptionType="Other">This paper was previously accepted at the 7th International Workshop on Spoken Dialogue Systems (IWSDS2016), Saariselkä, Finland, January 13-16, 2016.
This research is supported by JSPS KAKENHI Grant Number 26330313 and the Center of Innovation Program from the Japan Science and Technology Agency (JST). Peer-reviewed.</datacite:description>
          <dc:publisher>Springer</dc:publisher>
          <datacite:date dateType="Issued">2016-12-25</datacite:date>
          <dc:language>eng</dc:language>
          <dc:type rdf:resource="http://purl.org/coar/resource_type/c_5794">conference paper</dc:type>
          <oaire:version rdf:resource="http://purl.org/coar/version/c_ab4af688f83e57aa">AM</oaire:version>
          <jpcoar:identifier identifierType="URI">https://hiroshima-cu.repo.nii.ac.jp/records/1815</jpcoar:identifier>
          <jpcoar:relation>
            <jpcoar:relatedIdentifier identifierType="ISBN">978-981-10-2584-6|978-981-10-2585-3</jpcoar:relatedIdentifier>
          </jpcoar:relation>
          <jpcoar:relation relationType="isVersionOf">
            <jpcoar:relatedIdentifier identifierType="DOI">info:doi/10.1007/978-981-10-2585-3_11</jpcoar:relatedIdentifier>
          </jpcoar:relation>
          <jpcoar:relation>
            <jpcoar:relatedIdentifier identifierType="DOI">http://dx.doi.org/10.1007/978-981-10-2585-3_11</jpcoar:relatedIdentifier>
            <jpcoar:relatedTitle>http://dx.doi.org/10.1007/978-981-10-2585-3_11</jpcoar:relatedTitle>
          </jpcoar:relation>
          <jpcoar:sourceIdentifier identifierType="ISSN">1876-1100</jpcoar:sourceIdentifier>
          <jpcoar:sourceTitle>Lecture Notes in Electrical Engineering</jpcoar:sourceTitle>
          <jpcoar:volume>427</jpcoar:volume>
          <jpcoar:pageStart>145</jpcoar:pageStart>
          <jpcoar:pageEnd>157</jpcoar:pageEnd>
          <jpcoar:file>
            <jpcoar:URI label="IWSDS2016mera.pdf">https://hiroshima-cu.repo.nii.ac.jp/record/1815/files/IWSDS2016mera.pdf</jpcoar:URI>
            <jpcoar:mimeType>application/pdf</jpcoar:mimeType>
            <jpcoar:extent>159.4 kB</jpcoar:extent>
            <datacite:date dateType="Available">2023-03-07</datacite:date>
          </jpcoar:file>
        </jpcoar:jpcoar>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
