
CN107818798A - Customer service quality evaluating method, device, equipment and storage medium - Google Patents

Customer service quality evaluating method, device, equipment and storage medium Download PDF

Info

Publication number
CN107818798A
CN107818798A (application CN201710984748.3A)
Authority
CN
China
Prior art keywords
specified time
time section
fundamental frequency
user
threshold value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710984748.3A
Other languages
Chinese (zh)
Other versions
CN107818798B (en)
Inventor
于静磊
张仕梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710984748.3A priority Critical patent/CN107818798B/en
Publication of CN107818798A publication Critical patent/CN107818798A/en
Application granted granted Critical
Publication of CN107818798B publication Critical patent/CN107818798B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398 - Performance of employee with respect to a job function

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Child & Adolescent Psychology (AREA)
  • Quality & Reliability (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the invention disclose a customer service quality evaluation method, device, equipment and storage medium. The method includes: during communication between a user and a customer service agent, obtaining in real time the fundamental frequency information and speech rate information of the user's voice in each specified time period; determining the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information; and generating a service quality evaluation result according to the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication. Because the user's emotional state in each time period is determined in real time from each segment of user speech during the conversation, the resulting emotion assessment is more reliable and objective; an evaluation based on speech features can be completed as soon as the communication ends, yielding a comprehensive and objective service quality evaluation that the agent can learn of promptly, providing an effective reference for adjusting customer service work.

Description

Customer service quality evaluating method, device, equipment and storage medium
Technical field
Embodiments of the present invention relate to customer service assessment technology, and in particular to a customer service quality evaluation method, device, equipment and storage medium.
Background technology
Customer service is essential for any industry or company. Customer service agents take on responsibilities such as recommending new products, maintaining relationships with regular customers and answering questions; whether an agent meets users' needs and leaves them satisfied during communication is directly tied to the company's performance.
At present, in most industries, such as banking and telecommunications, after a call between an agent and a user ends, the user's evaluation of the agent's service quality is typically solicited by voice or by text message. In the voice approach, a prompt is played when the call ends but before the user hangs up, and the user can follow the prompt to express his or her evaluation. In the text-message approach, the system sends a text message to the user after the call ends, and the user replies to the message to give feedback.
Fig. 1 is an architecture and flow diagram of prior-art customer service evaluation. As shown in Fig. 1, a user places a call, an agent answers it, and the user communicates with the agent. After the communication, the system initiates an evaluation prompt (for example, playing a voice prompt or sending a text message). If the system receives the user's evaluation, it records it in the evaluation statistics; if it does not, the flow simply ends.
The above evaluation approach has the following defects:
(1) The evaluation lags: the evaluation prompt is initiated only after the call between the agent and the user ends, so the agent cannot obtain the user's emotional feedback at the earliest moment and cannot adjust the communication style in time;
(2) The evaluation is subjective: users often give feedback casually or give very poor ratings, so the feedback cannot provide an effective reference for adjusting customer service work;
(3) The evaluation is incomplete: users may choose to evaluate or not, so truly comprehensive user feedback cannot be collected.
In addition, other evaluation methods exist that judge emotion from the call recording after the call ends. However, the agent still cannot obtain the user's emotional feedback at the earliest moment and cannot adjust the communication style in time, and the resulting evaluation is not accurate enough.
Summary of the invention
Embodiments of the present invention provide a customer service quality evaluation method, device, equipment and storage medium that can obtain an objective service quality evaluation in real time from speech features; the resulting evaluation is more accurate and comprehensive and provides an effective reference for adjusting customer service work.
In a first aspect, an embodiment of the invention provides a customer service quality evaluation method, comprising:
during communication between a user and a customer service agent, obtaining in real time the fundamental frequency information and speech rate information of the user's voice in each specified time period;
determining the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information; and
generating a service quality evaluation result according to the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication.
In a second aspect, an embodiment of the invention further provides a customer service quality evaluation device, comprising:
an information acquisition module, configured to obtain, in real time during communication between a user and a customer service agent, the fundamental frequency information and speech rate information of the user's voice in each specified time period;
an emotion determination module, configured to determine the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information; and
an evaluation generation module, configured to generate a service quality evaluation result according to the user's emotional states over all specified time periods during the communication and the accuracy of the agent's answers.
In a third aspect, an embodiment of the invention further provides equipment, comprising:
one or more processors; and
a storage device configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the customer service quality evaluation method described in any embodiment of the invention.
In a fourth aspect, an embodiment of the invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the customer service quality evaluation method described in any embodiment of the invention.
With the technical solution of the embodiments of the invention, the user's emotional state in each time period is determined in real time during the communication from the fundamental frequency information and speech rate information of that segment of user speech, so the resulting emotion assessment is more reliable and objective; a service quality evaluation result is generated from the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication, so an evaluation based on speech features can be completed as soon as the communication ends, yielding a comprehensive, objective and accurate service quality evaluation. The agent can learn of the evaluation result promptly, which provides an effective reference for adjusting customer service work.
Brief description of the drawings
Fig. 1 is an architecture and flow diagram of prior-art customer service evaluation;
Fig. 2 is a flowchart of the customer service quality evaluation method provided in Embodiment one of the invention;
Fig. 3 is an architecture and flow diagram of the customer service quality evaluation provided in Embodiment one of the invention;
Fig. 4 is a flowchart of the customer service quality evaluation method provided in Embodiment two of the invention;
Fig. 5 is a flow diagram, provided in Embodiment two of the invention, of determining the user's emotional state from fundamental frequency information and speech rate information;
Fig. 6 is a structural diagram of the customer service quality evaluation device provided in Embodiment three of the invention;
Fig. 7 is another structural diagram of the customer service quality evaluation device provided in Embodiment three of the invention;
Fig. 8 is a structural diagram of the equipment provided in Embodiment four of the invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention rather than the entire structure.
Embodiment one
Fig. 2 is a flowchart of the customer service quality evaluation method provided in Embodiment one of the invention. This embodiment is applicable to situations where the service quality of customer service needs to be evaluated objectively in real time. The method can be performed by a customer service quality evaluation device, which can be implemented in software and/or hardware and is typically integrated in a server. As shown in Fig. 2, the method specifically includes:
S210: during communication between a user and a customer service agent, obtain in real time the fundamental frequency information and speech rate information of the user's voice in each specified time period.
During the communication, the user's speech can be separated from the agent's speech and parsed, and the fundamental frequency information and speech rate information can be obtained from the user speech parsing result. The speech parsing can be performed in the customer service quality evaluation device itself, or by means of an additional speech recognition device; for example, the agent terminal transmits the call audio to the speech recognition device in real time for parsing, and the customer service quality evaluation device obtains the parsing result from the speech recognition device.
The length of a specified time period can be determined according to the current conversation content category, which in turn can be determined from keywords in the speech parsing result; conversation content categories include new-product introduction and question answering. In practice, each conversation content category and its corresponding period length can be stored in advance; once the conversation content category has been determined from the actual call audio, the speech is segmented according to the corresponding period length. For example, in a new-product introduction the agent speaks a lot and the user speaks little, so few changes in the user's emotion are detectable and the period length can be set longer, for example 3 to 5 minutes; in question answering the user speaks more and more emotion changes are detectable, so the period length can be set shorter, for example 1 minute. A single call may involve several conversation content categories, so the length of each specified time period may vary over the course of the call.
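As an illustration only, the stored category/length pairs and the keyword-based lookup could look like the following sketch; the category names, keyword lists and period lengths are hypothetical, since the patent only states that such pairs are stored in advance:

```python
# Hypothetical keyword lists and period lengths; the patent only states that
# category/length pairs are stored in advance and chosen by keyword matching.
CATEGORY_KEYWORDS = {
    "new_product_introduction": ["new product", "introduce", "feature"],
    "question_answering": ["how do i", "why", "problem", "not working"],
}
PERIOD_LENGTH_SECONDS = {
    "new_product_introduction": 4 * 60,  # longer periods: user speaks little
    "question_answering": 60,            # shorter periods: user speaks a lot
}

def period_length_for(transcript_so_far: str) -> int:
    """Pick the segment length for the current conversation content category."""
    scores = {
        category: sum(kw in transcript_so_far.lower() for kw in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return PERIOD_LENGTH_SECONDS[best]
```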
The fundamental frequency information of a specified time period includes the mean, maximum and minimum fundamental frequency, etc. The speech rate information of a specified time period includes the mean speech rate, the starting speech rate and the ending speech rate, etc.
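As an illustration only (the patent does not prescribe an implementation), assuming a speech front end has already produced frame-level F0 values and word timestamps for the user's speech, the per-period statistics listed above might be computed as follows:

```python
from dataclasses import dataclass

@dataclass
class SegmentFeatures:
    f0_mean: float     # mean fundamental frequency (Hz) over the period
    f0_max: float      # maximum fundamental frequency (Hz)
    f0_min: float      # minimum fundamental frequency (Hz)
    rate_mean: float   # mean speech rate (words per second)
    rate_start: float  # starting speech rate (first sub-window)
    rate_end: float    # ending speech rate (last sub-window)

def extract_features(f0_frames, word_times, period_start, period_end,
                     window=10.0) -> SegmentFeatures:
    """f0_frames: F0 values (Hz) of voiced frames inside the period.
    word_times: timestamps (s) of recognized user words inside the period."""
    duration = period_end - period_start

    def words_per_second(lo, hi):
        return sum(lo <= t < hi for t in word_times) / (hi - lo)

    return SegmentFeatures(
        f0_mean=sum(f0_frames) / len(f0_frames),
        f0_max=max(f0_frames),
        f0_min=min(f0_frames),
        rate_mean=len(word_times) / duration,
        rate_start=words_per_second(period_start, period_start + window),
        rate_end=words_per_second(period_end - window, period_end),
    )
```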
S220: determine the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information.
The user emotional states in this embodiment include pleased, displeased and angry; of course, more emotional states can be distinguished according to actual needs, and the embodiments of the invention are not limited in this respect. A user's emotional state shows itself in the features of the user's speech, so speech-emotion rules can be set in advance, and the user's emotional state is determined from those rules together with the currently obtained fundamental frequency information and speech rate information. Specifically, the three-dimensional PAD emotion model is used: pleasure (Pleasure) represents the positive or negative character of the emotional state (happy vs. unhappy), arousal (Arousal) represents the level of neurophysiological activation (activated vs. inactive), and dominance (Dominance) represents the individual's degree of control over the situation and other people (in control vs. controlled). In communication between the agent and the user, mainly the pleasure and arousal dimensions are used. Pleasure has a significant positive correlation with fundamental frequency (F0): the higher the F0 mean over a period of time, the more the user's emotion tends toward pleasure, and the lower the F0 mean, the more it tends toward displeasure. Arousal is also positively correlated with F0; the F0 means of pleasure and anger are higher than those of other emotions, the speech rate briefly accelerates in a pleased state, and the fundamental frequency rises most in an angry state. For each specified time period, the user's emotional state in that period is determined from the fundamental frequency information and speech rate information of that segment of user speech, which makes the resulting emotion assessment more accurate and reliable.
S230: generate a service quality evaluation result according to the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication.
Here, the agent speech parsing result can be compared with a standard answer to determine the accuracy of the agent's answer, yielding accuracy values for multiple specified time periods. When the communication is complete, the user emotional states and the agent answer accuracies of all specified time periods of this communication can be combined to obtain the service quality evaluation result for this call.
With the technical solution of this embodiment, the user's emotional state in each time period is determined in real time during the communication from the fundamental frequency information and speech rate information of that segment of user speech, so the resulting emotion assessment is more reliable and objective; a service quality evaluation result is generated from the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication, so an evaluation based on speech features can be completed as soon as the communication ends, yielding a comprehensive, objective and accurate service quality evaluation. The agent can learn of the evaluation result promptly, which provides an effective reference for adjusting customer service work.
After the user's emotional state in the current specified time period is obtained in S220, that emotional state can be fed back to the agent terminal in real time. The agent can thus see the user's current emotional state promptly, adjust his or her communication style in the remainder of the conversation, and improve the user's satisfaction.
Fig. 3 is an architecture and flow diagram of the customer service quality evaluation provided in Embodiment one of the invention. As shown in Fig. 3, a user places a call, an agent answers it, and the user communicates with the agent. During the communication, the user's emotional state and the agent's answer accuracy for each specified time period are obtained in real time from speech features, and the user's emotional state is fed back to the agent terminal in real time for the agent's reference. When the communication is complete, the user emotional states and agent answer accuracies of all specified time periods of this communication are combined to obtain the service quality evaluation result for this call.
Embodiment two
Fig. 4 is a flowchart of the customer service quality evaluation method provided in Embodiment two of the invention. On the basis of the above embodiment, this embodiment provides a preferred implementation of determining the user's emotional state in a specified time period, and of generating the customer service quality evaluation result from the user emotional states and the agent answer accuracy. As shown in Fig. 4, the method includes:
S410: during communication between a user and a customer service agent, obtain in real time the fundamental frequency information and speech rate information of the user's voice in each specified time period.
S420: for each specified time period, obtain the preset fundamental frequency thresholds and the preset speech rate threshold corresponding to the conversation content category of that period.
Here, the preset fundamental frequency thresholds include a fundamental frequency variation threshold (the first fundamental frequency threshold) and a fundamental frequency mean threshold (the second fundamental frequency threshold), and the preset speech rate threshold is a speech rate variation threshold. The preset fundamental frequency thresholds and the preset speech rate threshold can be empirical values; because the thresholds for different industries and different conversation content categories may differ, the fundamental frequency and speech rate thresholds corresponding to each conversation content category in each industry can be stored in advance, and the appropriate thresholds are retrieved according to the speech content during the actual communication.
S430: obtain a first emotion result from the fundamental frequency information of the specified time period and the preset fundamental frequency thresholds, and obtain a second emotion result from the speech rate information of the specified time period and the preset speech rate threshold.
Here, according to the preset speech-emotion rules, emotion results are obtained separately from the fundamental frequency information and from the speech rate information. The two results can be used to verify each other, which improves the correctness of the speech-based emotion analysis.
S440: determine the user's emotional state in the specified time period according to the first emotion result and the second emotion result.
If the first emotion result does not contradict the second emotion result, the emotion result obtained from the fundamental frequency information and the emotion result obtained from the speech rate information confirm each other, and the resulting user emotional state is correct. If the two results contradict each other, one of them can be chosen as the final user emotional state; preferably, weights are set for the fundamental frequency parameter and the speech rate parameter according to the actual situation, and the emotion result corresponding to the parameter with the larger weight is taken as the user's emotional state in the specified time period.
S450: feed the user's emotional state in the specified time period back to the agent terminal in real time.
S460: generate a service quality evaluation result according to the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication.
Further, obtaining the first emotion result in S430 from the fundamental frequency information of the specified time period and the preset fundamental frequency thresholds includes: calculating the fundamental frequency variation value of the specified time period from its fundamental frequency information; comparing the fundamental frequency variation value with the first fundamental frequency threshold; and, if the fundamental frequency variation value is greater than the first fundamental frequency threshold, determining that the user is in the first emotional state in the specified time period.
The fundamental frequency variation value is calculated as (F0_max - F0_mean)/T, where F0_max is the maximum fundamental frequency of the specified time period, F0_mean is its mean fundamental frequency, and T is its duration. A fundamental frequency variation value greater than the first fundamental frequency threshold means the user's fundamental frequency varies strongly in this period; according to the speech-emotion rules, the user's emotion in this period is then determined to be anger, i.e. the first emotional state is anger. If the fundamental frequency variation value is less than or equal to the first fundamental frequency threshold, the user's emotional state remains undetermined and is determined from the other parameters.
Further, obtaining the first emotion result in S430 from the fundamental frequency information of the specified time period and the preset fundamental frequency thresholds includes: comparing the mean fundamental frequency in the fundamental frequency information of the specified time period with the second fundamental frequency threshold; if the mean fundamental frequency is lower than the second fundamental frequency threshold, determining that the user is in the second emotional state in the specified time period; and, if the mean fundamental frequency is higher than the second fundamental frequency threshold, determining that the user is in the third emotional state in the specified time period.
Here, the mean fundamental frequency is higher in a pleased state and lower in a displeased state; the second emotional state is displeasure and the third emotional state is pleasure. The mean fundamental frequency Avg(F0) can be obtained directly by parsing the user's speech; if the parsing result does not include it, it can be computed from the fundamental frequency information. If the mean fundamental frequency is exactly equal to the second fundamental frequency threshold, the user can be deemed, according to the actual situation, to be in either the second or the third emotional state in the specified time period.
Further, obtaining the second emotion result in S430 from the speech rate information of the specified time period and the preset speech rate threshold includes: calculating the speech rate variation value of the specified time period from its speech rate information; comparing the speech rate variation value with the preset speech rate threshold; and, if the speech rate variation value is greater than the preset speech rate threshold, determining that the user is in the third emotional state in the specified time period.
The speech rate variation value is calculated as (S_end - S_start)/T, where S_end is the ending speech rate of the specified time period, S_start is its starting speech rate, and T is its duration. A speech rate variation value greater than the preset speech rate threshold means the user's speech rate accelerates in this period; according to the speech-emotion rules, the user's emotion in this period is then determined to be pleasure. If the speech rate variation value is less than or equal to the preset speech rate threshold, the user's emotional state remains undetermined and is determined from the other parameters.
It should be noted that the three emotion determination rules above have no fixed execution order in S430; they can also be executed simultaneously, and the three results are combined to obtain the final user emotional state. As an example, Fig. 5 shows one possible order: the user's emotional state is first determined from the speech rate variation (S431), then from the fundamental frequency variation (S432), then from the mean fundamental frequency (S433), and finally the three results are combined into the final user emotional state.
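The following is a minimal sketch, not part of the patent text, of the decision order shown in Fig. 5; it reuses the SegmentFeatures sketch above, assumes the thresholds and period duration are already known, and illustrates the weighted tie-breaking of S440 with hypothetical default weights:

```python
def determine_emotion(feat: SegmentFeatures, duration: float,
                      f0_var_threshold: float, f0_mean_threshold: float,
                      rate_threshold: float,
                      f0_weight: float = 0.6, rate_weight: float = 0.4) -> str:
    """Return 'angry', 'pleased' or 'displeased' for one specified time period."""
    # Second emotion result (speech rate rule, S431): acceleration -> pleased.
    rate_variation = (feat.rate_end - feat.rate_start) / duration
    rate_emotion = "pleased" if rate_variation > rate_threshold else None

    # First emotion result (fundamental frequency rules, S432/S433).
    f0_variation = (feat.f0_max - feat.f0_mean) / duration
    if f0_variation > f0_var_threshold:
        f0_emotion = "angry"            # large F0 swing -> anger
    elif feat.f0_mean < f0_mean_threshold:
        f0_emotion = "displeased"       # low F0 mean -> displeasure
    else:
        f0_emotion = "pleased"          # high F0 mean -> pleasure

    # Combine (S440): if the results agree or the rate rule is undetermined,
    # keep the F0 result; on contradiction, follow the larger preset weight.
    if rate_emotion is None or rate_emotion == f0_emotion:
        return f0_emotion
    return f0_emotion if f0_weight >= rate_weight else rate_emotion
```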
Further, S460 includes: obtaining the emotion score corresponding to each specified time period's emotional state during the communication, and computing the weighted sum of the emotion scores of all specified time periods, using the weight corresponding to each period's conversation content category, to obtain a first evaluation score; obtaining the accuracy score corresponding to each specified time period's accuracy during the communication, and computing the weighted sum of the accuracy scores of all specified time periods, using the weight corresponding to each period's conversation content category, to obtain a second evaluation score; and calculating the service quality evaluation result from a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score and the second evaluation score.
Here, every emotional state has a corresponding score, e.g. 90 for the pleased state, 60 for the displeased state and 20 for the angry state. Different accuracies also have corresponding scores, e.g. a score of 90 for an accuracy of 90% and a score of 70 for an accuracy of 70%. In practice, the emotion scores and accuracy scores can each be normalized before the evaluation scores are computed. The weighted sum of the user emotion scores of all specified time periods in the whole communication, with weights given by each period's conversation content category, is the first evaluation score; the weighted sum of the agent answer accuracy scores of all specified time periods, with the same category weights, is the second evaluation score. The total evaluation score is then computed from the preset emotion parameter weight and accuracy parameter weight, giving the final service quality evaluation result for this call.
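A minimal sketch of this scoring step, using the example emotion scores above; the category weights, the normalization by total weight and the 0.5/0.5 parameter weights are assumptions for illustration, not values from the patent:

```python
EMOTION_SCORE = {"pleased": 90, "displeased": 60, "angry": 20}   # example values
CATEGORY_WEIGHT = {"new_product_introduction": 1.0, "question_answering": 1.5}

def service_quality_score(periods, emotion_weight=0.5, accuracy_weight=0.5):
    """periods: list of (conversation_category, emotion_label, answer_accuracy_0_to_1)."""
    weights = [CATEGORY_WEIGHT[cat] for cat, _, _ in periods]
    total_w = sum(weights)
    # First evaluation score: weighted sum of per-period emotion scores.
    first = sum(w * EMOTION_SCORE[emo] for w, (_, emo, _) in zip(weights, periods)) / total_w
    # Second evaluation score: weighted sum of per-period accuracy scores (0-100).
    second = sum(w * acc * 100 for w, (_, _, acc) in zip(weights, periods)) / total_w
    # Combine with the preset emotion/accuracy parameter weights.
    return emotion_weight * first + accuracy_weight * second

# Example: a call with two specified time periods.
print(service_quality_score([
    ("new_product_introduction", "pleased", 0.9),
    ("question_answering", "displeased", 0.7),
]))
```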
Obtaining the accuracy of the agent's answer for a specified time period includes: taking each specified time period as the current specified time period, extracting the question attribute from the user speech parsing result of the current specified time period, and obtaining the standard answer corresponding to the question attribute; obtaining the agent speech parsing result corresponding to the user speech parsing result of the current specified time period; and calculating the degree of match between the answer in the agent speech parsing result and the standard answer to obtain the accuracy of the agent's answer for the current specified time period.
Here, the standard answer corresponding to a user question attribute can be obtained with a machine learning model: the extracted question attribute is input to the model, and the model outputs the corresponding standard answer. When the machine learning model is built, it is trained with question-attribute samples and standard-answer samples, and the classifier's parameters are adjusted so that it can accurately output the standard answer corresponding to a question attribute. A text similarity algorithm, such as the Jaccard similarity coefficient or Euclidean distance, can be used to calculate the degree of match between answers, and the degree of match is taken as the accuracy of the agent's answer.
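For instance, a token-level Jaccard similarity between the agent's answer and the standard answer could serve as the match degree; this is only a sketch of one of the text similarity algorithms mentioned, not the patent's exact computation:

```python
def jaccard_match(agent_answer: str, standard_answer: str) -> float:
    """Jaccard similarity between the token sets of the two answers (0..1)."""
    a, b = set(agent_answer.lower().split()), set(standard_answer.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# The match degree is used directly as the answer accuracy of the period.
accuracy = jaccard_match(
    "you can reset the password from the account settings page",
    "reset the password on the account settings page",
)
```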
With the technical solution of this embodiment, the user emotional state obtained from the preset speech-emotion rules, the preset thresholds and the currently obtained fundamental frequency and speech rate information is more objective, and the agent can see the user's emotion promptly and adjust the communication style in time, making the customer more satisfied. Comparing the agent's speech with the standard answer yields a more objective answer accuracy, and combining the user emotional states with the agent answer accuracy yields a comprehensive and objective service quality evaluation result, providing an effective reference for customer service work.
Embodiment three
Fig. 6 is a structural diagram of the customer service quality evaluation device provided in Embodiment three of the invention. As shown in Fig. 6, the device includes an information acquisition module 610, an emotion determination module 620 and an evaluation generation module 630.
The information acquisition module 610 is configured to obtain, in real time during communication between a user and a customer service agent, the fundamental frequency information and speech rate information of the user's voice in each specified time period;
the emotion determination module 620 is configured to determine the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information; and
the evaluation generation module 630 is configured to generate a service quality evaluation result according to the user's emotional states over all specified time periods during the communication and the accuracy of the agent's answers.
Optionally, the emotion determination module 620 includes:
a threshold acquisition unit, configured to obtain, for each specified time period, the preset fundamental frequency thresholds and the preset speech rate threshold corresponding to the conversation content category of that period;
a result generation unit, configured to obtain a first emotion result from the fundamental frequency information of the specified time period and the preset fundamental frequency thresholds, and to obtain a second emotion result from the speech rate information of the specified time period and the preset speech rate threshold; and
an emotion determination unit, configured to determine the user's emotional state in the specified time period according to the first emotion result and the second emotion result.
Further, the result generation unit is specifically configured to:
calculate the fundamental frequency variation value of the specified time period from its fundamental frequency information;
compare the fundamental frequency variation value with the first fundamental frequency threshold; and
if the fundamental frequency variation value is greater than the first fundamental frequency threshold, determine that the user is in the first emotional state in the specified time period.
Further, the result generation unit is also configured to:
compare the mean fundamental frequency in the fundamental frequency information of the specified time period with the second fundamental frequency threshold;
if the mean fundamental frequency is lower than the second fundamental frequency threshold, determine that the user is in the second emotional state in the specified time period; and
if the mean fundamental frequency is higher than the second fundamental frequency threshold, determine that the user is in the third emotional state in the specified time period.
Further, the result generation unit is also configured to:
calculate the speech rate variation value of the specified time period from its speech rate information;
compare the speech rate variation value with the preset speech rate threshold; and
if the speech rate variation value is greater than the preset speech rate threshold, determine that the user is in the third emotional state in the specified time period.
Further, the emotion determination unit is specifically configured to: when the first emotion result and the second emotion result contradict each other, take, according to a preset fundamental frequency parameter weight and speech rate parameter weight, the emotion result corresponding to the parameter with the larger weight as the user's emotional state in the specified time period.
Optionally, the evaluation generation module 630 includes:
a first acquisition unit, configured to obtain the emotion score corresponding to each specified time period's emotional state during the communication, and to compute the weighted sum of the emotion scores of all specified time periods, using the weight corresponding to each period's conversation content category, to obtain a first evaluation score;
a second acquisition unit, configured to obtain the accuracy score corresponding to each specified time period's accuracy during the communication, and to compute the weighted sum of the accuracy scores of all specified time periods, using the weight corresponding to each period's conversation content category, to obtain a second evaluation score; and
an evaluation calculation unit, configured to calculate the service quality evaluation result from a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score and the second evaluation score.
As shown in Fig. 7, the device may further include:
an emotion feedback module 640, configured to feed the user's emotional state in the current specified time period back to the agent terminal in real time;
a standard answer acquisition module 650, configured to take each specified time period as the current specified time period, extract the question attribute from the user speech parsing result of the current specified time period, and obtain the standard answer corresponding to the question attribute;
a parsing result acquisition module 660, configured to obtain the agent speech parsing result corresponding to the user speech parsing result of the current specified time period; and
an accuracy calculation module 670, configured to calculate the degree of match between the answer in the agent speech parsing result and the standard answer to obtain the accuracy of the agent's answer for the current specified time period.
The customer service quality evaluation device provided by the embodiments of the invention can perform the customer service quality evaluation method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to performing that method. For technical details not described exhaustively in this embodiment, refer to the customer service quality evaluation method provided by any embodiment of the invention.
Example IV
Fig. 8 is a structural diagram of the equipment provided in Embodiment four of the invention, shown as a block diagram of an example equipment 12 suitable for implementing the embodiments of the invention. The equipment 12 shown in Fig. 8 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the invention.
As shown in Fig. 8, the equipment 12 takes the form of a general-purpose computing device. Its components may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus and the Peripheral Component Interconnect (PCI) bus.
The equipment 12 typically includes a variety of computer system readable media. These media may be any available media accessible to the equipment 12, including volatile and non-volatile media and removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The equipment 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. Merely as an example, the storage system 34 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in Fig. 8, commonly called a "hard disk drive"). Although not shown in Fig. 8, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g. a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (e.g. a CD-ROM, DVD-ROM or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a group of (e.g. at least one) program modules configured to perform the functions of the embodiments of the invention.
A program/utility 40 having a group of (at least one) program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the invention.
The equipment 12 may also communicate with one or more external devices 14 (e.g. a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the equipment 12, and/or with any device (e.g. a network card, a modem, etc.) that enables the equipment 12 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 22. Moreover, the equipment 12 may communicate with one or more networks (e.g. a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 20. As shown in Fig. 8, the network adapter 20 communicates with the other modules of the equipment 12 through the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the equipment 12, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems, etc.
By running the programs stored in the system memory 28, the processing unit 16 performs various functional applications and data processing, for example implementing the customer service quality evaluation method provided by the embodiments of the invention.
Embodiment five
Embodiment five of the invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the customer service quality evaluation method described in any embodiment of the invention.
The computer storage medium of the embodiments of the invention may use any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave that carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it may send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device.
Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for carrying out the operations of the invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the invention and the technical principles applied. Those skilled in the art will understand that the invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made without departing from the scope of protection of the invention. Therefore, although the invention has been described in further detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from the inventive concept; its scope is determined by the scope of the appended claims.

Claims (15)

  1. A customer service quality evaluation method, characterized by comprising:
    during communication between a user and a customer service agent, obtaining in real time the fundamental frequency information and speech rate information of the user's voice in each specified time period;
    determining the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information; and
    generating a service quality evaluation result according to the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication.
  2. The method according to claim 1, characterized in that determining the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information comprises:
    for each specified time period, obtaining the preset fundamental frequency thresholds and the preset speech rate threshold corresponding to the conversation content category of that period;
    obtaining a first emotion result from the fundamental frequency information of the specified time period and the preset fundamental frequency thresholds, and obtaining a second emotion result from the speech rate information of the specified time period and the preset speech rate threshold; and
    determining the user's emotional state in the specified time period according to the first emotion result and the second emotion result.
  3. The method according to claim 2, characterized in that obtaining the first emotion result from the fundamental frequency information of the specified time period and the preset fundamental frequency thresholds comprises:
    calculating the fundamental frequency variation value of the specified time period from its fundamental frequency information;
    comparing the fundamental frequency variation value with the first fundamental frequency threshold; and
    if the fundamental frequency variation value is greater than the first fundamental frequency threshold, determining that the user is in the first emotional state in the specified time period.
  4. The method according to claim 2, characterized in that obtaining the first emotion result from the fundamental frequency information of the specified time period and the preset fundamental frequency thresholds comprises:
    comparing the mean fundamental frequency in the fundamental frequency information of the specified time period with the second fundamental frequency threshold;
    if the mean fundamental frequency is lower than the second fundamental frequency threshold, determining that the user is in the second emotional state in the specified time period; and
    if the mean fundamental frequency is higher than the second fundamental frequency threshold, determining that the user is in the third emotional state in the specified time period.
  5. The method according to claim 2, characterized in that obtaining the second emotion result from the speech rate information of the specified time period and the preset speech rate threshold comprises:
    calculating the speech rate variation value of the specified time period from its speech rate information;
    comparing the speech rate variation value with the preset speech rate threshold; and
    if the speech rate variation value is greater than the preset speech rate threshold, determining that the user is in the third emotional state in the specified time period.
  6. The method according to claim 2, characterized in that determining the user's emotional state in the specified time period according to the first emotion result and the second emotion result comprises:
    if the first emotion result and the second emotion result contradict each other, taking, according to a preset fundamental frequency parameter weight and speech rate parameter weight, the emotion result corresponding to the parameter with the larger weight as the user's emotional state in the specified time period.
  7. The method according to claim 1, characterized in that generating the service quality evaluation result according to the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication comprises:
    obtaining the emotion score corresponding to each specified time period's emotional state during the communication, and computing the weighted sum of the emotion scores of all specified time periods, using the weight corresponding to each period's conversation content category, to obtain a first evaluation score;
    obtaining the accuracy score corresponding to each specified time period's accuracy during the communication, and computing the weighted sum of the accuracy scores of all specified time periods, using the weight corresponding to each period's conversation content category, to obtain a second evaluation score; and
    calculating the service quality evaluation result from a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score and the second evaluation score.
  8. The method according to claim 1, characterized in that, before generating the service quality evaluation result according to the user's emotional states over all specified time periods and the accuracy of the agent's answers during the communication, the method further comprises:
    taking each specified time period as the current specified time period, extracting the question attribute from the user speech parsing result of the current specified time period, and obtaining the standard answer corresponding to the question attribute;
    obtaining the agent speech parsing result corresponding to the user speech parsing result of the current specified time period; and
    calculating the degree of match between the answer in the agent speech parsing result and the standard answer to obtain the accuracy of the agent's answer for the current specified time period.
  9. The method according to any one of claims 1 to 8, characterized in that, after determining the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information, the method further comprises:
    feeding the user's emotional state in the current specified time period back to the agent terminal in real time.
  10. A kind of 10. customer service quality evaluation device, it is characterised in that including:
    Data obtaining module, in user and customer service communication process, user in real voice to be in each specified time section Fundamental frequency information and word speed information;
    Mood determining module, for according to the fundamental frequency information and word speed information of each specified time section, determining the use respectively Emotional state of the family in each specified time section;
    Evaluate generation module, for according to the user in the communication process emotional state of all specified time sections and The degree of accuracy that customer service is answered, generate service quality evaluation result.
  11. The device according to claim 10, characterized in that the emotion determining module comprises:
    a threshold obtaining unit, configured to obtain, for each specified time period, a preset fundamental frequency threshold and a preset speech rate threshold corresponding to the dialog content category of the specified time period;
    a result generating unit, configured to obtain a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold, and obtain a second emotion result according to the speech rate information of the specified time period and the preset speech rate threshold;
    an emotion determining unit, configured to determine the emotional state of the user in the specified time period according to the first emotion result and the second emotion result.
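For illustration only (not part of the claims): the per-category thresholds used by the threshold obtaining unit could be kept in a simple lookup table; the category names, threshold values and the fallback default below are invented for the example.

```python
# Hypothetical per-category threshold table; categories and numbers are invented.
PRESET_THRESHOLDS = {
    "complaint":  {"pitch_hz": 200.0, "rate_wps": 3.0},
    "enquiry":    {"pitch_hz": 220.0, "rate_wps": 3.5},
    "after_sale": {"pitch_hz": 210.0, "rate_wps": 3.2},
}

def thresholds_for(category: str) -> dict:
    """Return the preset fundamental frequency / speech rate thresholds
    for a period's dialog content category, with a fallback default."""
    return PRESET_THRESHOLDS.get(category, {"pitch_hz": 210.0, "rate_wps": 3.2})
```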
  12. The device according to claim 11, characterized in that the result generating unit is specifically configured to:
    calculate a fundamental frequency variation value of the specified time period according to the fundamental frequency information of the specified time period, compare the fundamental frequency variation value with a first fundamental frequency threshold, and if the fundamental frequency variation value is greater than the first fundamental frequency threshold, determine that the user is in a first emotional state in the specified time period;
    compare a fundamental frequency mean in the fundamental frequency information of the specified time period with a second fundamental frequency threshold, and if the fundamental frequency mean is less than the second fundamental frequency threshold, determine that the user is in a second emotional state in the specified time period, or if the fundamental frequency mean is greater than the second fundamental frequency threshold, determine that the user is in a third emotional state in the specified time period;
    calculate a speech rate variation value of the specified time period according to the speech rate information of the specified time period, compare the speech rate variation value with the preset speech rate threshold, and if the speech rate variation value is greater than the preset speech rate threshold, determine that the user is in the third emotional state in the specified time period.
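For illustration only (not part of the claims): the threshold comparisons recited in claim 12 can be gathered into a single function. The concrete threshold values, the use of the standard deviation as the "variation value", the state labels and the order in which the three checks are combined are all assumptions, since the claim lists them as parallel determinations.

```python
import statistics

def emotion_from_features(pitch_values_hz: list[float],
                          rate_values_wps: list[float],
                          first_pitch_threshold: float = 40.0,   # Hz, variation
                          second_pitch_threshold: float = 180.0, # Hz, mean
                          rate_threshold: float = 1.5) -> str:   # words/sec, variation
    """Map per-period pitch and speech-rate statistics to a coarse emotional state."""
    pitch_variation = statistics.pstdev(pitch_values_hz)
    pitch_mean = statistics.fmean(pitch_values_hz)
    rate_variation = statistics.pstdev(rate_values_wps)

    # Large pitch swings -> first emotional state.
    if pitch_variation > first_pitch_threshold:
        return "first_state"
    # Large speech-rate swings -> third emotional state.
    if rate_variation > rate_threshold:
        return "third_state"
    # Otherwise decide by the mean pitch against the second threshold.
    return "second_state" if pitch_mean < second_pitch_threshold else "third_state"
```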
  13. The device according to claim 10, characterized in that the evaluation generating module comprises:
    a first obtaining unit, configured to obtain an emotion score corresponding to the emotional state of each specified time period in the communication process, and calculate a weighted sum of the emotion scores of all specified time periods according to the weight corresponding to the dialog content category of each specified time period, to obtain a first evaluation score;
    a second obtaining unit, configured to obtain an accuracy score corresponding to the answer accuracy of each specified time period in the communication process, and calculate a weighted sum of the accuracy scores of all specified time periods according to the weight corresponding to the dialog content category of each specified time period, to obtain a second evaluation score;
    an evaluation calculating unit, configured to calculate the service quality evaluation result according to a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score and the second evaluation score.
  14. An apparatus, characterized in that the apparatus comprises:
    one or more processors; and
    a storage device, configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the customer service quality evaluation method according to any one of claims 1 to 9.
  15. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the customer service quality evaluation method according to any one of claims 1 to 9 is implemented.
CN201710984748.3A 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium Active CN107818798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710984748.3A CN107818798B (en) 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710984748.3A CN107818798B (en) 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107818798A true CN107818798A (en) 2018-03-20
CN107818798B CN107818798B (en) 2020-08-18

Family

ID=61608520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710984748.3A Active CN107818798B (en) 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107818798B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103811009A (en) * 2014-03-13 2014-05-21 华东理工大学 Smart phone customer service system based on speech analysis
WO2015174628A1 (en) * 2014-05-12 2015-11-19 주식회사 네이블커뮤니케이션즈 Volte quality measuring system and volte quality measuring method utilizing terminal agent
CN107093431A (en) * 2016-02-18 2017-08-25 中国移动通信集团辽宁有限公司 A kind of method and device that quality inspection is carried out to service quality
CN107154257A (en) * 2017-04-18 2017-09-12 苏州工业职业技术学院 Customer service quality evaluating method and system based on customer voice emotion

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197963A (en) * 2018-03-28 2018-06-22 广州市菲玛尔咨询服务有限公司 A kind of intelligent customer service manages system
CN108962282A (en) * 2018-06-19 2018-12-07 京北方信息技术股份有限公司 Speech detection analysis method, apparatus, computer equipment and storage medium
CN109033257A (en) * 2018-07-06 2018-12-18 中国平安人寿保险股份有限公司 Talk about art recommended method, device, computer equipment and storage medium
CN109242529A (en) * 2018-07-31 2019-01-18 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle device equipment and the online method of investigation and study of user experience based on scene analysis
CN108962281B (en) * 2018-08-15 2021-05-07 三星电子(中国)研发中心 Language expression evaluation and auxiliary method and device
CN108962281A (en) * 2018-08-15 2018-12-07 三星电子(中国)研发中心 A kind of evaluation of language expression and householder method and device
CN109408756A (en) * 2018-09-21 2019-03-01 广州神马移动信息科技有限公司 The monitoring method and its device of user behavior in Ask-Answer Community
CN112863547B (en) * 2018-10-23 2022-11-29 腾讯科技(深圳)有限公司 Virtual resource transfer processing method, device, storage medium and computer equipment
CN112863547A (en) * 2018-10-23 2021-05-28 腾讯科技(深圳)有限公司 Virtual resource transfer processing method, device, storage medium and computer equipment
CN109327632A (en) * 2018-11-23 2019-02-12 深圳前海微众银行股份有限公司 Intelligent quality inspection system, method and the computer readable storage medium of customer service recording
CN109726655A (en) * 2018-12-19 2019-05-07 平安普惠企业管理有限公司 Customer service evaluation method, device, medium and equipment based on Emotion identification
CN111353804A (en) * 2018-12-24 2020-06-30 中移(杭州)信息技术有限公司 Service evaluation method, device, terminal equipment and medium
CN109816220A (en) * 2019-01-07 2019-05-28 平安科技(深圳)有限公司 Quality of service monitoring and treating method and apparatus based on intelligent decision
CN109785862A (en) * 2019-01-21 2019-05-21 深圳壹账通智能科技有限公司 Customer service quality evaluating method, device, electronic equipment and storage medium
CN109902938A (en) * 2019-01-31 2019-06-18 平安科技(深圳)有限公司 Obtain method, apparatus, computer equipment and the storage medium of learning materials
CN111832851B (en) * 2019-04-15 2024-03-29 北京嘀嘀无限科技发展有限公司 Detection method and device
CN111832851A (en) * 2019-04-15 2020-10-27 北京嘀嘀无限科技发展有限公司 Detection method and device
CN110147936A (en) * 2019-04-19 2019-08-20 深圳壹账通智能科技有限公司 Service evaluation method, apparatus based on Emotion identification, storage medium
CN110288214A (en) * 2019-06-14 2019-09-27 秒针信息技术有限公司 The method and device of partition of the level
CN112232101A (en) * 2019-07-15 2021-01-15 北京正和思齐数据科技有限公司 User communication state evaluation method, device and system
CN110363154A (en) * 2019-07-17 2019-10-22 安徽航天信息有限公司 A kind of service quality examining method and system based on Emotion identification
CN112308591A (en) * 2019-08-02 2021-02-02 中国移动通信有限公司研究院 User evaluation method, device, equipment and computer readable storage medium
CN110827796A (en) * 2019-09-23 2020-02-21 平安科技(深圳)有限公司 Interviewer determination method and device based on voice, terminal and storage medium
CN110827796B (en) * 2019-09-23 2024-05-24 平安科技(深圳)有限公司 Interviewer judging method and device based on voice, terminal and storage medium
CN110718228A (en) * 2019-10-22 2020-01-21 中信银行股份有限公司 Voice separation method and device, electronic equipment and computer readable storage medium
CN111026793A (en) * 2019-11-25 2020-04-17 珠海格力电器股份有限公司 Data processing method, device, medium and equipment
CN111080109A (en) * 2019-12-06 2020-04-28 中信银行股份有限公司 Customer service quality evaluation method and device and electronic equipment
CN111080109B (en) * 2019-12-06 2023-05-05 中信银行股份有限公司 Customer service quality evaluation method and device and electronic equipment
CN111199158A (en) * 2019-12-30 2020-05-26 沈阳民航东北凯亚有限公司 Method and device for scoring civil aviation customer service
CN111242508A (en) * 2020-02-14 2020-06-05 厦门快商通科技股份有限公司 Method, device and equipment for evaluating customer service quality based on natural language processing
CN111311327A (en) * 2020-02-19 2020-06-19 平安科技(深圳)有限公司 Service evaluation method, device, equipment and storage medium based on artificial intelligence
CN112040074A (en) * 2020-08-24 2020-12-04 华院数据技术(上海)有限公司 Professional burnout detection method for telephone customer service staff based on voice acoustic information
CN112132477A (en) * 2020-09-28 2020-12-25 中国银行股份有限公司 Service performance determination method and device
CN112131369A (en) * 2020-09-29 2020-12-25 中国银行股份有限公司 Service class determination method and device
CN112131369B (en) * 2020-09-29 2024-02-02 中国银行股份有限公司 Service class determining method and device
CN112364661B (en) * 2020-11-11 2024-03-19 北京大米科技有限公司 Data detection method and device, readable storage medium and electronic equipment
CN112364661A (en) * 2020-11-11 2021-02-12 北京大米科技有限公司 Data detection method and device, readable storage medium and electronic equipment
CN114519596A (en) * 2020-11-18 2022-05-20 中国移动通信有限公司研究院 Data processing method, device and equipment
CN112671984A (en) * 2020-12-01 2021-04-16 长沙市到家悠享网络科技有限公司 Service mode switching method and device, robot customer service and storage medium
CN112885376A (en) * 2021-01-23 2021-06-01 深圳通联金融网络科技服务有限公司 Method and device for improving voice call quality inspection effect
CN113011158A (en) * 2021-03-23 2021-06-22 北京百度网讯科技有限公司 Information anomaly detection method and device, electronic equipment and storage medium
CN113472958A (en) * 2021-07-13 2021-10-01 上海华客信息科技有限公司 Method, system, electronic device and storage medium for receiving branch telephone in centralized mode
CN113674765A (en) * 2021-08-18 2021-11-19 中国联合网络通信集团有限公司 Voice customer service quality inspection method, device, equipment and storage medium
CN114049973A (en) * 2021-11-15 2022-02-15 阿里巴巴(中国)有限公司 Dialogue quality inspection method, electronic device, computer storage medium, and program product
TWI815400B (en) * 2022-04-14 2023-09-11 合作金庫商業銀行股份有限公司 Emotion analysis system

Also Published As

Publication number Publication date
CN107818798B (en) 2020-08-18

Similar Documents

Publication Publication Date Title
CN107818798A (en) Customer service quality evaluating method, device, equipment and storage medium
US11380327B2 (en) Speech communication system and method with human-machine coordination
US10586539B2 (en) In-call virtual assistant
US10057419B2 (en) Intelligent call screening
US10044864B2 (en) Computer-implemented system and method for assigning call agents to callers
US20200220975A1 (en) Personalized support routing based on paralinguistic information
US20190253558A1 (en) System and method to automatically monitor service level agreement compliance in call centers
US10592611B2 (en) System for automatic extraction of structure from spoken conversation using lexical and acoustic features
US10750018B2 (en) Modeling voice calls to improve an outcome of a call between a representative and a customer
WO2021051506A1 (en) Voice interaction method and apparatus, computer device and storage medium
CN109767765A (en) Talk about art matching process and device, storage medium, computer equipment
JP6341092B2 (en) Expression classification device, expression classification method, dissatisfaction detection device, and dissatisfaction detection method
WO2014069076A1 (en) Conversation analysis device and conversation analysis method
CN105723360A (en) Improving natural language interactions using emotional modulation
CN105744090A (en) Voice information processing method and device
US12118978B2 (en) Systems and methods for generating synthesized speech responses to voice inputs indicative of a user in a hurry
CN110570853A (en) Intention recognition method and device based on voice data
CN110633912A (en) Method and system for monitoring service quality of service personnel
KR20190117840A (en) Method and computer readable recording medium for, during a customer consulting by a conversation understanding ai system, passing responsibility of proceeding with subsequent customer consulting to a human consultant
CN114328867A (en) Intelligent interruption method and device in man-machine conversation
CN114138960A (en) User intention identification method, device, equipment and medium
CN101460994A (en) Speech differentiation
EP4093005A1 (en) System method and apparatus for combining words and behaviors
KR20200122916A (en) Dialogue system and method for controlling the same
CN115831125A (en) Speech recognition method, device, equipment, storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant