Disclosure of Invention
To overcome the problems in the related art, the present specification provides an image generation method, apparatus, and device.
A method of image generation, the method comprising:
editing a first image drawn by using source data according to an editing instruction in an image editing state to obtain editing data;
after receiving a confirmation generation instruction, drawing a second image based on the source data and the editing data;
wherein the drawing parameter values used for drawing the first image and the second image are different, so that the sizes of the two images differ.
Optionally, the drawing parameter values include one or more of: a display area size of the image, a font size, a word spacing, a graphic scaling, a graphic spacing, and a frame size of the editing data.
Optionally, a drawing parameter value for drawing the first image is determined based on a resolution of a device performing the method; or,
a drawing parameter value for drawing the first image is determined based on a first parameter setting instruction; or,
a drawing parameter value for drawing the second image is a preset value; or,
a drawing parameter value for drawing the second image is determined based on a second parameter setting instruction.
Optionally, the first image and/or the second image comprises a trend graph of a product object.
Optionally, the editing data includes editing content and position information used for determining a position of the editing content in a drawing board, and the drawing of the second image based on the editing data includes:
determining the position of the editing content in the drawing board by using the position information in the editing data, and drawing the editing content at that position of the drawing board.
Optionally, the size of the first image is larger than the size of the second image.
Optionally, the method further includes:
sending image information carrying the second image to a server, wherein the image information is used to instruct the server to generate a URL of the second image and to send the URL to a target terminal indicated by the image information.
An image generation apparatus, the apparatus comprising:
an editing module, configured to edit, in an image editing state, a first image drawn by using source data according to an editing instruction, to obtain editing data;
a drawing module, configured to draw, after receiving a confirmation generation instruction, a second image based on the source data and the editing data;
wherein the drawing parameter values used for drawing the first image and the second image are different, so that the sizes of the two images differ.
Optionally, the drawing parameter values include one or more of: a display area size of the image, a font size, a word spacing, a graphic scaling, a graphic spacing, and a frame size of the editing data.
Optionally, a drawing parameter value for drawing the first image is determined based on a resolution of a device in which the apparatus is deployed; or,
a drawing parameter value for drawing the first image is determined based on a first parameter setting instruction; or,
a drawing parameter value for drawing the second image is a preset value; or,
a drawing parameter value for drawing the second image is determined based on a second parameter setting instruction.
Optionally, the editing data includes editing content and position information used for determining a position of the editing content in a drawing board, and the drawing module is specifically configured to:
determine the position of the editing content in the drawing board by using the position information in the editing data, and draw the editing content at that position of the drawing board.
Optionally, the apparatus further comprises:
an information sending module, configured to send image information carrying the second image to a server, wherein the image information is used to instruct the server to generate a URL of the second image and to send the URL to a target terminal indicated by the image information.
A computer device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
editing a first image drawn by using source data according to an editing instruction in an image editing state to obtain editing data;
after receiving a confirmation generation instruction, drawing a second image based on the source data and the editing data;
wherein the drawing parameter values used for drawing the first image and the second image are different, so that the sizes of the two images differ.
The technical solutions provided by the embodiments of this specification may have the following beneficial effects:
in the embodiments of this specification, in an image editing state, a first image drawn by using source data may be edited according to an editing instruction to obtain editing data, and after a confirmation generation instruction is received, a second image may be drawn based on the source data and the editing data. Because the first image and the second image are drawn with different drawing parameter values, each image is adapted to its own size, which avoids the distortion caused by simply scaling one image into the other.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various kinds of information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of this specification. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
With the development of information technology, information visualization technology has been widely used; it can convert some information into two-dimensional line graphs (such as line graphs, wave graphs, and trend graphs) so that the information can be analyzed. For example: converting the quarterly turnover of an enterprise into a trend graph; converting statistics about people in different age groups into an age distribution map; converting the trading information of a stock into a stock K-line chart; and the like.
Taking a financial application as an example, in scenarios such as exchanging or publishing information, a product object can be added based on a user instruction, and the added product object is displayed in the form of an image. For example, the user may add a trend graph of a product of interest (e.g., a stock or a fund) to the current discussion, viewpoint, or answer, and may also annotate the trend graph in the form of labels. In the trend graph editing stage, a trend graph can be added, and operations such as label editing can be performed on it. It will be appreciated that during the trend graph editing stage, the user may choose freely whether to add a label. The trend graph may be a technical chart in which trading information of a stock market, a futures market, or the like is displayed by a curve or a K-line. In one example, the horizontal axis of the coordinates may represent a time period, the upper half of the vertical axis may show the stock price or index for that time period, and the lower half may show the trading volume. A label may be a written description attached to the trend graph.
In the editing stage, in order to let the user clearly see the full view of the image, a larger image can be displayed in a larger display area, so that the user can quickly select the area to which editing information such as a label should be added. After the confirmation generation instruction is received, a smaller image is required in order to improve the display effect and to meet transmission or storage requirements. It can be seen that two image sizes are involved in the process of adding a product object. The two images share the same source data and the same editing data, but their display proportions differ. If the large image is reduced to the small image simply by scaling it according to the aspect ratio, the image may be distorted and the objects displayed in it may look uncoordinated. The distortion is especially severe when only one direction is compressed, or when one direction is compressed much more than the other; for example, shrinking the width far more than the height squashes characters and label frames.
In order to provide an image generation scheme with a better visual effect, in an image editing state, the embodiments of this specification may edit a first image drawn by using source data according to an editing instruction to obtain editing data, and may draw a second image based on the source data and the editing data after receiving a confirmation generation instruction.
This specification can be applied to a device that needs to generate an image, and the device may be a client device or a server device. In one scenario, a client device may obtain, from a server device, the source data used for drawing an image and then draw the image. The client device may be any electronic device capable of drawing images, such as a handheld electronic device or another electronic device; for example, a cellular phone, a media player or other handheld portable device, a smaller portable device such as a wristwatch device, a pendant device or other wearable or miniaturized device, gaming equipment, a tablet computer, a notebook computer, a desktop computer, a television, a computer integrated into a computer display, or other electronic equipment. The server device may be a server or a cluster of servers.
As shown in fig. 1A, fig. 1A is a flow chart of an image generation method shown in the present specification according to an exemplary embodiment, which may include the steps of:
in step 101, in an image editing state, a first image rendered by source data is edited according to an editing instruction, and editing data is obtained.
In step 102, upon receiving a confirmation generation instruction, a second image is rendered based on the source data and the edit data.
The drawing parameter values used for drawing the first image and the second image are different, so that the sizes of the two images differ.
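To make the two steps above concrete, the following is a minimal sketch of the flow in fig. 1A, assuming a TypeScript client. All names (SourceData, DrawParams, renderChart, and so on) and the stubbed drawing routine are illustrative assumptions, not part of this specification; the point is only that the second image is redrawn from the source data and the editing data with a different parameter set rather than scaled down from the first image.

```typescript
// Illustrative sketch only: the chart-drawing routine is stubbed out, and all
// type and function names are assumptions made for this example.
interface SourceData { times: number[]; prices: number[]; volumes: number[] }
interface EditData   { content: string; time: number }            // e.g. a "buy"/"sell" label
interface DrawParams { width: number; height: number; fontSize: number }

// Stand-in for the real drawing routine: both images use the same drawing policy
// but receive different drawing parameter values.
function renderChart(source: SourceData, edits: EditData[], params: DrawParams): string {
  return `chart ${params.width}x${params.height}, font ${params.fontSize}px, ` +
         `${source.times.length} points, ${edits.length} labels`;
}

// Step 101: in the image editing state, draw the large first image and collect editing data.
function editFirstImage(source: SourceData, firstParams: DrawParams): EditData[] {
  console.log(renderChart(source, [], firstParams));      // first image, shown for editing
  return [{ content: 'buy', time: source.times[0] }];     // e.g. the user adds one label
}

// Step 102: after the confirmation generation instruction, redraw from the same source
// data plus the editing data using the second parameter set (no scaling of the first image).
function generateSecondImage(source: SourceData, edits: EditData[], secondParams: DrawParams): string {
  return renderChart(source, edits, secondParams);
}

// Usage: a large image for editing, then a small image after confirmation.
const source: SourceData = { times: [1, 2, 3], prices: [10, 11, 10.5], volumes: [100, 80, 120] };
const edits = editFirstImage(source, { width: 1080, height: 720, fontSize: 28 });
console.log(generateSecondImage(source, edits, { width: 360, height: 240, fontSize: 12 }));
```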
In the image editing state, a first image can be generated and edited. The source data may be the data used to draw the first image, and the first image may include a two-dimensional line graph such as a line graph, a wave graph, or a trend graph. For example, if the first image is an age distribution map, the source data may be the source data used to draw the age distribution map; as another example, the first image may include a trend graph of a product object. If the first image is a trend graph of a product object such as a stock or a fund, the source data may be the source data used to draw the trend graph of that stock or fund.
In the image editing state, the first image of an object may be added according to an object addition instruction. For example, a trend graph of a certain product may be added in a discussion scenario, a viewpoint publishing scenario, and the like; when a product trend graph is added, a product and a time period may be selected, and different source data may be acquired for different time periods.
In the image editing state, the first image may be edited, for example by adding a textual description in the form of a label. The textual description may include information explaining the first image, information such as buy points and sell points marked on the first image, or other information that needs to be noted, which is not listed here one by one. An editing instruction can be generated based on a trigger operation by the user, and after the editing operation is performed on the first image according to the editing instruction, editing data can be obtained. The editing operation can subsequently be reproduced by using the editing data.
In one example, the editing data may include not only the displayed editing content but also position information for determining the position of the editing content in a drawing board, where the drawing board may be the drawing board used for drawing the second image. In view of this, when the second image is drawn based on the editing data, the position of the editing content in the drawing board may be determined using the position information in the editing data, and the editing content may be drawn at that position of the drawing board. For example, a base image may be drawn based on the source data, the position of the editing content in the base image may then be determined using the position information in the editing data, and the editing content may be drawn at that position of the base image.
As an example, the position information may include coordinate information in the two-dimensional line graph. For example, when the horizontal axis of the two-dimensional line graph represents time, the position information may include the time information corresponding to the editing content, so that the drawing position of the editing content is determined based on that time information when the second image is drawn. For instance, if the editing content is label information for a certain bar in the chart, the time information of that bar may be taken as the time information of the label information, so that the label information is drawn at the position corresponding to that time when the second image is drawn. In this way, the editing data is mapped between the first image and the second image through the time information, which avoids position errors when the label is drawn in the second image.
In some examples, the position information may include both abscissa information and ordinate information, so that the position of the editing content is determined from the abscissa and ordinate information and the editing content can be redrawn and restored.
The position information may be one or more of the factors that determine the position of the editing content in the drawing board; other factors may also be involved in determining that position, for example whether the graphic frame showing the editing content extends to the left or to the right from a given time point, which are not listed here one by one. As an example, these other factors may be agreed upon in advance in the drawing policy used for drawing the second image.
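As a hedged illustration of how such position information can be reused across drawing boards of different sizes, the sketch below stores a label's position as data coordinates (time, and optionally price) and converts them to pixel positions on whichever drawing board is in use. The field names, the linear axes, and the concrete numbers are assumptions made for this example.

```typescript
// Illustrative only: editing data keeps positions in data coordinates (time/price),
// so the same label can be redrawn correctly on drawing boards of different sizes.
interface EditItem {
  content: string;   // displayed editing content, e.g. "buy" or "sell"
  time: number;      // horizontal position expressed as a time coordinate
  price?: number;    // optional vertical position expressed as a price coordinate
}

interface Board {
  width: number; height: number;     // drawing-board size in pixels
  timeRange: [number, number];       // time span covered by the horizontal axis
  priceRange: [number, number];      // price span covered by the upper vertical axis
}

// Map a data coordinate onto a pixel position of the given drawing board.
function toBoardPosition(item: EditItem, board: Board): { x: number; y: number } {
  const [t0, t1] = board.timeRange;
  const [p0, p1] = board.priceRange;
  const x = ((item.time - t0) / (t1 - t0)) * board.width;
  const y = item.price === undefined
    ? 0
    : (1 - (item.price - p0) / (p1 - p0)) * board.height;   // canvas y grows downwards
  return { x, y };
}

// The same label lands at proportionally the same place on the large and the small board.
const label: EditItem = { content: 'sell', time: 150, price: 3016.53 };
const largeBoard: Board = { width: 1080, height: 720, timeRange: [0, 240], priceRange: [2900, 3100] };
const smallBoard: Board = { width: 360,  height: 240, timeRange: [0, 240], priceRange: [2900, 3100] };
console.log(toBoardPosition(label, largeBoard), toBoardPosition(label, smallBoard));
```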
After the user finishes editing, a confirmation generation instruction can be triggered, for example by clicking a confirmation button. After receiving the confirmation generation instruction, the executing device may draw the second image based on the source data and the editing data. The first image and the second image can be drawn with the same drawing policy but with different drawing parameter values, so that the sizes of the two images obtained by drawing differ.
Regarding the sizes of the first image and the second image: in some scenarios the first image is edited and then displayed, so, as an example, the size of the first image is larger than the size of the second image. The larger first image is provided in the image editing state, which makes it easy for the user to see the full view of the image clearly, while the smaller second image is provided in the final confirmation stage, which improves the display effect and meets the transmission requirement. It will be appreciated that in other application scenarios the size of the first image may be smaller than the size of the second image.
The first image can have a corresponding first drawing parameter value set (containing at least one drawing parameter value), and the second image can have a corresponding second drawing parameter value set (containing at least one drawing parameter value). For the same drawing parameter, the first image and the second image are drawn with different values; the main purpose is to make the sizes of the two images different while achieving a good display effect at each size. In view of this, the first drawing parameter value set contains parameter values determined to suit the size of the first image, and the second drawing parameter value set contains parameter values determined to suit the size of the second image. The second image, drawn with the parameter values in the second set, can therefore be finely adjusted relative to the first image, drawn with the parameter values in the first set, based on the differences between the drawing parameter values.
A drawing parameter value may be the size of the display area of the image, or an attribute value of an object (primitive) in the image, for example the size of a primitive or the spacing between primitives. The image may include primitives such as characters and graphics; a character may be an independent character or a character inside a label. For example, as shown in fig. 1B, the related stock/fund information above the trend graph includes "SSE Composite Index", "1A0001.SH", "3016.53", and so on, and the characters inside the labels include "buy", "sell", and so on. A graphic may be a graphic describing information such as the stock price at each time point; as shown in fig. 1B, the stock price at each time point is described using a bar graph. Further, the editing content in the editing data may be displayed inside a graphic frame to distinguish the editing content from the base image.
As an example, the drawing parameter values may include: a font size, a word spacing, a graphic scaling, a graphic spacing, a frame size of the editing data, and the like.
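For illustration, two drawing parameter value sets might look like the sketch below; the parameter names follow the list above, while the concrete numbers are placeholders chosen for this example and are not values defined by the specification.

```typescript
// Illustrative drawing parameter value sets for the two image sizes; every number
// below is a placeholder for the example, not a value defined by the specification.
interface DrawParamSet {
  displayWidth: number;    // display area size of the image (width)
  displayHeight: number;   // display area size of the image (height)
  fontSize: number;        // font size of characters
  wordSpacing: number;     // spacing between characters
  graphicScale: number;    // scaling applied to graphics (primitives)
  graphicSpacing: number;  // spacing between graphics, e.g. between bars
  labelFrameSize: number;  // frame size of the editing data, e.g. a label's border box
}

// First set: adapted to the large, editable first image.
const firstParams: DrawParamSet = {
  displayWidth: 1080, displayHeight: 720,
  fontSize: 28, wordSpacing: 4,
  graphicScale: 1.0, graphicSpacing: 6, labelFrameSize: 48,
};

// Second set: adapted to the small second image. Every value is chosen for that
// size, rather than obtained by compressing the first set along one axis.
const secondParams: DrawParamSet = {
  displayWidth: 360, displayHeight: 240,
  fontSize: 12, wordSpacing: 1,
  graphicScale: 0.4, graphicSpacing: 2, labelFrameSize: 18,
};

console.log(firstParams, secondParams);
```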
For the mapping of characters between the first image and the second image, a font size suitable for the image of each size is set, so that the characters are displayed at that font size in the image of that size. This avoids characters being deformed by forced compression and gives the image a better display effect. Characters at different positions may have the same or different font sizes; for example, the size of the characters inside a label may differ from the size of the characters outside it.
Likewise, for the mapping of characters between the first image and the second image, a word spacing suitable for the image of each size is set, so that displaying the characters at that spacing gives the image a better display effect. For example, this avoids the defect that forced compression shrinks the word spacing so much that the gaps between adjacent characters become indistinct. It also avoids the situation where, because only one direction is compressed (or one direction is compressed more strongly than the other), the spacing in the heavily compressed direction shrinks noticeably while the spacing in the uncompressed or lightly compressed direction remains large.
For the mapping of graphics between the first image and the second image, a graphic scaling suitable for the image of each size is set, so that the graphics in the image of that size are scaled according to that ratio and are thus adapted to the image.
For the mapping of graphics between the first image and the second image, a graphic spacing suitable for the image of each size is set, so that displaying the graphics at that spacing gives the image a better display effect. For example, this avoids the defect that forced compression shrinks the graphic spacing so much that the gaps between adjacent graphics become indistinct, and it likewise avoids the situation where compressing only one direction (or one direction much more than the other) noticeably shrinks the spacing in that direction while the spacing in the other direction remains large.
For the mapping of the editing data between the first image and the second image, a frame size suitable for the image of each size is set, so that the frame is displayed at that size in the image of that size. This prevents the frame from covering other information; for example, it prevents the border of a label from covering other information.
It can be understood that, in order to better render images of different sizes, other fine adjustments may be made to the attributes of the primitives in the images, which are not listed here one by one.
As shown in fig. 1B, fig. 1B is an effect diagram of a first image and a second image shown in this specification according to an exemplary embodiment. In this example, the first image may be displayed in the interface for adding a product: the product information is displayed as a daily K-line chart based on the drawing parameter values suited to the size of the first image, the first image is editable, and label information such as "buy" and "sell" has been added. The second image is drawn with the drawing parameter values suited to the size of the second image, so that each attribute value of the primitives in the second image is adaptively adjusted; the second image therefore achieves a better display effect, without deformed characters, inconsistent spacing, or similar defects. In this example, the second image is displayed in the interface for initiating a discussion; after a supplementary explanation is added, the information can be sent to the target terminal through the server by clicking the send control. It can be seen that no character deformation or similar distortion occurs in the second image.
As for how the drawing parameter values of the first image are determined, as an example, the drawing parameter values of the first image may be determined based on a first parameter setting instruction. The first parameter setting instruction may be an instruction generated based on a trigger operation by the user; for example, in a preset parameter setting interface, the user may set the drawing parameter values of the first image. As another example, a parameter value set may be configured for each resolution, and the parameter value set corresponding to the resolution of the device is acquired to obtain the drawing parameter values of the first image, where the device may be the device that performs the image generation method. Configuring different parameter value sets for different resolutions in this way can improve the display effect of the first image.
As for how the drawing parameter values of the second image are determined, as an example, the drawing parameter values of the second image may be determined based on a second parameter setting instruction. The second parameter setting instruction may be an instruction generated based on a trigger operation by the user; for example, in a preset parameter setting interface, the user may set the drawing parameter values of the second image, which makes the parameter values controllable. As another example, a parameter value set may be configured for each resolution, and the parameter value set corresponding to the device resolution is acquired to obtain the drawing parameter values of the second image, where the device may be the device that performs the image generation method. Configuring different parameter value sets for different resolutions in this way can improve the display effect of the second image.
In some examples, the second image needs to be transmitted to other terminals whose resolutions are difficult to determine. In view of this, the drawing parameter values of the second image may be specified in advance, that is, set to fixed preset values, so that different terminals present the same second image.
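The two determination strategies can be sketched as follows: the first image's parameter set is looked up by device resolution, while the second image's set is a fixed preset so that every receiving terminal renders the same image. The resolution buckets, field names, and numbers are assumptions for this example.

```typescript
// Illustrative only: resolution buckets and parameter values are assumptions.
interface ParamSet { displayWidth: number; fontSize: number; graphicSpacing: number }

// A parameter value set configured for each supported resolution of the executing device.
const firstImageParamsByResolution: Record<string, ParamSet> = {
  '1080p': { displayWidth: 1080, fontSize: 28, graphicSpacing: 6 },
  '720p':  { displayWidth: 720,  fontSize: 20, graphicSpacing: 4 },
};

function resolveFirstImageParams(deviceResolution: string): ParamSet {
  // Fall back to the smallest configuration if the resolution is not listed.
  return firstImageParamsByResolution[deviceResolution] ?? firstImageParamsByResolution['720p'];
}

// Preset, device-independent values for the second image, so that the different
// terminals which later receive it all present the same result.
const secondImageParams: ParamSet = { displayWidth: 360, fontSize: 12, graphicSpacing: 2 };

console.log(resolveFirstImageParams('1080p'), secondImageParams);
```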
For example, in scenarios such as initiating a discussion or publishing a viewpoint, the image information carrying the second image may be sent to the server and forwarded by the server to the target terminal. As an example, the image information carrying the second image may be sent to a server, where the image information is used to instruct the server to generate a URL (Uniform Resource Locator) of the second image and to send the URL to the target terminal indicated by the image information.
As shown in fig. 1C, fig. 1C is a schematic diagram of information interaction shown in this specification according to an exemplary embodiment. In an image editing state, a first user terminal can edit a first image drawn by using source data according to an editing instruction to obtain editing data; after receiving a confirmation generation instruction, it draws a second image based on the source data and the editing data, and then sends the image information carrying the second image to a server, where the drawing parameter values used for drawing the first image and the second image are different, so that the sizes of the two images differ. The server may generate a URL of the second image based on the received image information and transmit the URL to the second user terminal indicated by the image information. The second user terminal may parse the URL so that the second image can be viewed.
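On the first terminal, the interaction in fig. 1C could be sketched roughly as follows; the endpoint path, payload fields, and response shape are hypothetical, and only the flow follows the description above (upload the second image, let the server generate a URL and forward it to the target terminal).

```typescript
// Illustrative only: '/api/images', the payload fields and the response shape are
// hypothetical; the server-side URL generation and forwarding are not shown here.
interface ImageInfo {
  imageBase64: string;       // the drawn second image
  targetTerminalId: string;  // identifies the terminal that should receive the URL
  note?: string;             // optional supplementary explanation
}

async function sendSecondImage(info: ImageInfo): Promise<string> {
  const response = await fetch('/api/images', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(info),
  });
  const { url } = await response.json();   // URL of the second image generated by the server
  return url;                              // the server also pushes the URL to the target terminal
}
```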
In addition, when the second image is drawn, all of the source data and the editing data can be obtained from the first image stage, so different sets of drawing parameters can be configured as required to present second images with different effects. Obtaining the source data and the editing data in this way also makes later extension convenient, since corresponding functions can be added based on the obtained source data and editing data.
The technical features in the above embodiments can be combined arbitrarily, as long as the combined features do not conflict with or contradict one another; for brevity, the possible combinations are not described one by one, but any such combination of the technical features in the above embodiments also falls within the scope disclosed in this specification.
The following describes one such combination as an example.
As shown in fig. 2, fig. 2 is an application scene diagram of an image generation method shown in this specification according to an exemplary embodiment. To add an image of a product object, a publisher page (such as a discussion publishing page, a viewpoint publishing page, or a reply page) can be entered (step 201), trend graph data of the product is obtained based on an adding instruction of the user (step 202), a first drawing parameter value set for drawing the trend graph (large graph) is obtained (step 203), and the product trend graph (large graph) is drawn using the first drawing parameter value set and the trend graph data (step 204). Label information is added to the product trend graph based on an editing instruction of the user (step 205); after a confirmation generation instruction is received, the trend graph data and the label information are acquired (step 206), a second drawing parameter value set for drawing the trend graph (small graph) is acquired (step 207), and the product trend graph (small graph) is drawn using the second drawing parameter value set, the trend graph data, and the label information (step 208).
It can be seen that two different sets of parameters are configured for the stock/fund trend graph, covering, for example, the font size of the related stock/fund information above the trend graph, the size of the trend graph display area, the font size of the numerical values inside the trend graph, the size of the labels added to the trend graph, the size of the characters inside the labels, and the various spacings used to display the trend graph. Using the same data source with different configuration parameters achieves a consistent and polished visual effect; if trend graphs with other proportions are needed later, a good visual effect can be achieved simply by configuring another set of drawing parameters.
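Tying steps 202 to 208 together, a rough end-to-end sketch could look like the following; the types, function names, and numbers are assumptions made for the example, and the drawing routine is stubbed out.

```typescript
// Illustrative end-to-end sketch of steps 202-208 in fig. 2. The large and small
// trend graphs share the same trend graph data and labels but use different
// drawing parameter value sets; all names and numbers are assumptions.
interface TrendData { times: number[]; prices: number[] }
interface Label     { content: string; time: number }
interface ParamSet  { width: number; height: number; fontSize: number }

const largeParams: ParamSet = { width: 1080, height: 720, fontSize: 28 };  // step 203
const smallParams: ParamSet = { width: 360,  height: 240, fontSize: 12 };  // step 207

function drawTrendGraph(data: TrendData, labels: Label[], params: ParamSet): string {
  // Stand-in for the real drawing routine: same drawing policy, different parameter values.
  return `trend graph ${params.width}x${params.height}, ${data.times.length} points, ${labels.length} labels`;
}

// Steps 202-205: obtain trend graph data, draw the large graph, let the user add labels.
const trendData: TrendData = { times: [1, 2, 3], prices: [3010.1, 3016.53, 3009.8] };
const largeGraph = drawTrendGraph(trendData, [], largeParams);
const labels: Label[] = [{ content: 'buy', time: 2 }];   // added via editing instructions

// Steps 206-208: after the confirmation generation instruction, redraw the small graph
// from the same trend graph data plus the label information.
const smallGraph = drawTrendGraph(trendData, labels, smallParams);
console.log(largeGraph, smallGraph);
```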
The present specification also provides embodiments of an image generating apparatus and an electronic device applied thereto, corresponding to the embodiments of the image generating method described above.
The image generation apparatus embodiments in this specification can be applied to a computer device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus is formed as a logical device by the processor of the computer device in which it is located reading the corresponding computer program instructions from the non-volatile memory into the memory and running them. In terms of hardware, fig. 3 is a hardware structure diagram of a computer device in which the image generation apparatus of this specification is located; besides the processor 310, the memory 330, the network interface 320, and the non-volatile memory 340 shown in fig. 3, the computer device in which the apparatus 331 is located in this embodiment may also include other hardware according to the actual function of the device, which is not described again here.
As shown in fig. 4, fig. 4 is a block diagram of an image generation apparatus shown in the present specification according to an exemplary embodiment, the apparatus including:
and the editing module 41 is configured to edit the first image drawn by using the source data according to the editing instruction in the image editing state, so as to obtain editing data.
And a rendering module 42, configured to render the second image based on the source data and the edit data after receiving the confirmation generation instruction.
And the drawing parameter values adopted for drawing the first image and the second image are different, so that the sizes of the two images are inconsistent.
Optionally, the drawing parameter values include one or more of: a display area size of the image, a font size, a word spacing, a graphic scaling, a graphic spacing, and a frame size of the editing data.
Optionally, a drawing parameter value for drawing the first image is determined based on a resolution of a device in which the apparatus is deployed.
Optionally, a drawing parameter value for drawing the first image is determined based on the first parameter setting instruction.
Optionally, the drawing parameter value of the second image is a preset value.
Optionally, a drawing parameter value for drawing the second image is determined based on a second parameter setting instruction.
Optionally, the first image and/or the second image comprises a trend graph of a product object.
Optionally, the editing data includes editing content and position information used for determining a position of the editing content in a drawing board, and the drawing module 42 is specifically configured to: determine the position of the editing content in the drawing board by using the position information in the editing data, and draw the editing content at that position of the drawing board.
Optionally, the size of the first image is larger than the size of the second image.
Optionally, the apparatus further comprises (not shown in fig. 4):
and the information sending module is used for sending the image information carrying the second image to a server, wherein the image information is used for indicating the server to generate a URL of the second image and sending the URL to a target terminal indicated by the image information.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the embodiment of the present specification further provides a computer device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
editing a first image drawn by using source data according to an editing instruction in an image editing state to obtain editing data;
after receiving a confirmation generation instruction, drawing a second image based on the source data and the editing data;
wherein the drawing parameter values used for drawing the first image and the second image are different, so that the sizes of the two images differ.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Accordingly, the embodiments of this specification further provide a computer storage medium having program instructions stored therein, wherein the program instructions, when executed, implement the following:
editing a first image drawn by using source data according to an editing instruction in an image editing state to obtain editing data;
after receiving a confirmation generation instruction, drawing a second image based on the source data and the editing data;
wherein the drawing parameter values used for drawing the first image and the second image are different, so that the sizes of the two images differ.
Embodiments of the present description may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.